[jira] [Commented] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120312#comment-15120312
 ] 

Jake Low commented on HDFS-9714:


Along similar lines, when calling {{rename2}} with {{src}} being a file and 
{{dst}} being a directory, the pre-2.7.0 Namenode would always return an 
{{IOException}} with the message {{"Source  and destination  must 
both be directories"}}.

Following HDFS-7509, the Namenode only throws this exception when {{dst}} is 
_writable_. If {{dst}} is unwritable, an AccessControlException is thrown 
instead.
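
For reference, a minimal sketch of a client call that exercises this path 
(paths are illustrative and a running cluster is assumed; {{FileContext#rename}} 
with {{Options.Rename}} goes through the {{rename2}} RPC):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class Rename2FileOntoDir {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    Path src = new Path("/tmp/some-file");      // an existing file
    Path dst = new Path("/tmp/some-directory"); // an existing directory
    // Pre-2.7.0: IOException ("Source ... and destination ... must both be
    // directories"). After HDFS-7509: only when dst is writable; otherwise
    // an AccessControlException is thrown instead.
    fc.rename(src, dst, Options.Rename.NONE);
  }
}
{code}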

> Rename throws AccessControlException (not FileNotFoundException) when src 
> doesn't exist
> ---
>
> Key: HDFS-9714
> URL: https://issues.apache.org/jira/browse/HDFS-9714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0, 2.7.1, 2.7.2
>Reporter: Jake Low
>
> It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] 
> broke the semantics of the {{rename}} and {{rename2}} RPCs.
> Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that 
> was resolvable (i.e. each ancestor directory was executable to the user and 
> therefore could be traversed) but which itself did not exist, the Namenode 
> would reply with a {{FileNotFoundException}}.
> The refactoring that took place in HDFS-7509 to avoid duplicate path 
> resolutions at different phases of a rename operation had the side effect of 
> breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
> following:
> {noformat}
> org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
> Permission denied: user=nobody, access=WRITE, 
> inode="/foo":hdfs:supergroup:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {noformat}
> Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't 
> exist, so it of course has no owner, group or mode bits. The information 
> shown above is actually the ownership and access rights of {{/}}, which would 
> be {{/foo}}'s parent if it existed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120515#comment-15120515
 ] 

GAO Rui commented on HDFS-9494:
---

Thanks for your comments [~jingzhao]. 

bq.  In DFSStripedOutputStream, we do not need to move all the data fields to 
the beginning of the class.
I moved them to the beginning of the class body because, when working on the 
field-related code, it was a little troublesome to find and add fields to 
{{DFSStripedOutputStream}}. Would it be easier for future coding if we move the 
fields to the beginning?

bq. flushAllExecutor.shutdownNow() can be called in the finally section
bq. flushAllFuturesMap does not need to be a class level field. It can be a 
temporary variable defined and used in flushAllInternals.

That makes a lot of sense. I will move {{flushAllExecutor.shutdownNow()}} to 
the finally section and make {{flushAllFuturesMap}} a local variable in the next 
patch. 
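
For reference, a rough sketch of the shape being discussed, with the executor 
shut down in a finally block and the futures map kept local to the method 
(illustrative only, not the actual patch; {{flushOneStreamer}} stands in for the 
per-streamer flushInternal-and-wait):

{code}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFlushSketch {
  // Stand-in for streamer#flushInternal() + streamer#waitForAckedSeqno().
  private void flushOneStreamer(int idx) throws IOException {
    // real logic lives in DFSStripedOutputStream
  }

  public void flushAllInternals(int numStreamers) throws IOException {
    ExecutorService flushAllExecutor = Executors.newFixedThreadPool(numStreamers);
    // Local to the method rather than a class-level field, as suggested.
    Map<Integer, Future<Void>> flushAllFuturesMap = new HashMap<>();
    try {
      CompletionService<Void> completionService =
          new ExecutorCompletionService<>(flushAllExecutor);
      // Trigger every streamer's flush in parallel.
      for (int i = 0; i < numStreamers; i++) {
        final int idx = i;
        flushAllFuturesMap.put(idx, completionService.submit(() -> {
          flushOneStreamer(idx);
          return null;
        }));
      }
      // Only return once every streamer has flushed and been acked.
      for (int i = 0; i < numStreamers; i++) {
        try {
          completionService.take().get();
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          throw new InterruptedIOException("Interrupted while flushing streamers");
        } catch (ExecutionException e) {
          throw new IOException("flushInternal failed", e.getCause());
        }
      }
    } finally {
      // Shut the executor down in the finally section, per the review comment.
      flushAllExecutor.shutdownNow();
    }
  }
}
{code}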






> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9712) libhdfs++: Reimplement Status object as a normal struct

2016-01-27 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9712:
--
Attachment: HDFS-9712.HDFS-8707.000.patch

> libhdfs++: Reimplement Status object as a normal struct
> ---
>
> Key: HDFS-9712
> URL: https://issues.apache.org/jira/browse/HDFS-9712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9712.HDFS-8707.000.patch
>
>
> hdfs::Status is doing all sorts of reinterpret casts on a block of memory 
> referenced by a char *.  Using a char *, casting to a wider type, and 
> dereferencing can cause fun alignment issues.
> As far as I can tell, the data layout in Status can be boiled down to:
> {code}
> class Status {
>   int code;
>   std::string msg;
> }
> {code}
> This avoids doing manual memcopies in the copy ctor and delete[]s in the 
> dtor.  It will also get rid of boilerplate null checks and casts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9677:

Priority: Minor  (was: Major)

> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9677:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.9.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

Thanks for the branch-2 patch, Mingliang. I've committed this to trunk and 
branch-2.

> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Fix For: 2.9.0
>
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120452#comment-15120452
 ] 

Mingliang Liu commented on HDFS-9677:
-

Thanks for the commit, [~jingzhao]!

> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120503#comment-15120503
 ] 

Hudson commented on HDFS-9677:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9195 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9195/])
HDFS-9677. Rename generationStampV1/generationStampV2 to (jing9: rev 
8a91109d16394310f2568717f103e6fff7cbddb0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSequentialBlockId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfLegacyGenerationStampsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfV1GenerationStampsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120527#comment-15120527
 ] 

Hadoop QA commented on HDFS-9711:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 47s {color} 
| {color:red} root-jdk1.8.0_66 with JDK v1.8.0_66 generated 1 new + 739 
unchanged - 0 fixed = 740 total (was 739) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 38s 
{color} | {color:red} root-jdk1.7.0_91 with JDK v1.7.0_91 generated 1 new + 735 
unchanged - 0 fixed = 736 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} root: patch generated 0 new + 108 unchanged - 5 
fixed = 108 total (was 113) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Created] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)
Jake Low created HDFS-9714:
--

 Summary: Rename throws AccessControlException (not 
FileNotFoundException) when src doesn't exist
 Key: HDFS-9714
 URL: https://issues.apache.org/jira/browse/HDFS-9714
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.2, 2.7.1, 2.7.0
Reporter: Jake Low


It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{{
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

}}

Note the `hdfs:supergroup:drwxr-xr-x` in the error string. {{/foo}} doesn't 
exist, so it of course has no owner, group or mode bits. The information shown 
above is actually the ownership and access rights of {{/}}, which would be 
{{/foo}}'s parent if it existed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9607) Advance Hadoop Architecture (AHA) - HDFS Update (write-in-place)

2016-01-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120280#comment-15120280
 ] 

Konstantin Shvachko commented on HDFS-9607:
---

>  Even though HDFS files store bytes, from a user's perspective these bytes 
> could be finance data or personal contact

Dinesh, this is application-level logic. HDFS has no knowledge of the 
semantics of the bytes it stores. It does not know how the data is structured, 
what the delimiters are, or whether the data is compressed, encrypted, 
serialized, etc.
Just ask yourself how you would implement your own {{canWrite()}} method at the 
HDFS level.
Users can do everything you are talking about by reading and comparing stored 
data with new values. The only thing missing is the ability to update bytes in 
an HDFS file. The positional write method I suggested 
[earlier|https://issues.apache.org/jira/browse/HDFS-9607?focusedCommentId=15088000=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15088000]
 would provide that capability.
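
To make that concrete: the read-and-compare half is already possible with the 
existing positional read API; only the final same-length write-in-place (the 
subject of this JIRA) is missing. A sketch with illustrative paths, and the 
hypothetical write-in-place step left as a comment:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompareBeforeUpdate {
  // Application-level "canWrite"-style check. HDFS itself cannot do this,
  // because it knows nothing about the structure of the stored bytes.
  static boolean regionMatches(FileSystem fs, Path file, long offset,
                               byte[] expected) throws Exception {
    try (FSDataInputStream in = fs.open(file)) {
      byte[] actual = new byte[expected.length];
      in.readFully(offset, actual);  // positional read, existing API
      return Arrays.equals(actual, expected);
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/data/example.txt");                        // illustrative
    byte[] oldBytes = "Hello World".getBytes(StandardCharsets.UTF_8);
    byte[] newBytes = "Hello HDFS!".getBytes(StandardCharsets.UTF_8); // same length
    if (regionMatches(fs, file, 0L, oldBytes)) {
      // The missing piece: a positional, same-length update of newBytes at
      // offset 0. No such API exists today; that is what this JIRA proposes.
    }
  }
}
{code}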

> Advance Hadoop Architecture (AHA) - HDFS Update (write-in-place)
> 
>
> Key: HDFS-9607
> URL: https://issues.apache.org/jira/browse/HDFS-9607
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Dinesh S. Atreya
>
> Link to Umbrella JIRA
> https://issues.apache.org/jira/browse/HADOOP-12620 
> Provide capability to carry out in-place writes/updates. Only writes in-place 
> are supported where the existing length does not change.
> For example, "Hello World" can be replaced by "Hello HDFS!"
> See 
> https://issues.apache.org/jira/browse/HADOOP-12620?focusedCommentId=15046300=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15046300
>  for more details.
> Currently following are supported.
> # Sequential writes
> # Append (HADOOP-1700, HDFS-265)
> # Snapshots (HDFS-2802)
> # Truncate (HDFS-3107)
> This JIRA is for random updates (write-in-place).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Low updated HDFS-9714:
---
Description: 
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{noformat}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

{noformat}

Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't 
exist, so it of course has no owner, group or mode bits. The information shown 
above is actually the ownership and access rights of {{/}}, which would be 
{{/foo}}'s parent if it existed.

  was:
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{noformat}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 

[jira] [Commented] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120512#comment-15120512
 ] 

Jake Low commented on HDFS-9714:


I've just tested 2.6.3 and 2.5.2; this issue exists there as well. I was 
mistaken to think that HDFS-7509 introduced this behavior; it just touched the 
same code.

Nonetheless, I think this is probably a bug -- it makes no sense to return an 
{{AccessControlException}} claiming a file is unwritable when in fact it 
doesn't exist.
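
For reference, a minimal way to reproduce the observation (paths are 
illustrative; the call is made as a user without write access to {{/}}, against 
a {{src}} that does not exist):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameMissingSrc {
  public static void main(String[] args) throws Exception {
    // Run as a user (e.g. "nobody") that cannot write to "/".
    FileSystem fs = FileSystem.get(new Configuration());
    Path src = new Path("/foo");  // does not exist, but "/" is traversable
    Path dst = new Path("/bar");
    // Expected: FileNotFoundException for the missing src.
    // Observed (2.5.x through 2.7.x): AccessControlException claiming "/foo"
    // is not writable, reporting "/"'s owner, group and mode in the message.
    fs.rename(src, dst);
  }
}
{code}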


> Rename throws AccessControlException (not FileNotFoundException) when src 
> doesn't exist
> ---
>
> Key: HDFS-9714
> URL: https://issues.apache.org/jira/browse/HDFS-9714
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0, 2.7.1, 2.7.2
>Reporter: Jake Low
>
> It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] 
> broke the semantics of the {{rename}} and {{rename2}} RPCs.
> Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that 
> was resolvable (i.e. each ancestor directory was executable to the user and 
> therefore could be traversed) but which itself did not exist, the Namenode 
> would reply with a {{FileNotFoundException}}.
> The refactoring that took place in HDFS-7509 to avoid duplicate path 
> resolutions at different phases of a rename operation had the side effect of 
> breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
> following:
> {noformat}
> org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
> Permission denied: user=nobody, access=WRITE, 
> inode="/foo":hdfs:supergroup:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {noformat}
> Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't 
> exist, so it of course has no owner, group or mode bits. The information 
> shown above is actually the ownership and access rights of {{/}}, which would 
> be {{/foo}}'s parent if it existed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120539#comment-15120539
 ] 

Hadoop QA commented on HDFS-9686:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} hadoop-hdfs-project: patch generated 0 new + 359 
unchanged - 11 fixed = 359 total (was 370) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 48s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 387m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests 

[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120540#comment-15120540
 ] 

GAO Rui commented on HDFS-9494:
---

BTW, would it reduce resource cost to keep 
{{flushAllExecutorCompletionService}} as a class-level field, or is it better to 
make it a local variable in {{flushAllInternals()}}, like {{flushAllFuturesMap}}, 
to limit unnecessary exposure?

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Low updated HDFS-9714:
---
Description: 
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{noformat}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

{noformat}

Note the `hdfs:supergroup:drwxr-xr-x` in the error string. {{/foo}} doesn't 
exist, so it of course has no owner, group or mode bits. The information shown 
above is actually the ownership and access rights of {{/}}, which would be 
{{/foo}}'s parent if it existed.

  was:
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{code}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 

[jira] [Updated] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist

2016-01-27 Thread Jake Low (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Low updated HDFS-9714:
---
Description: 
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{code}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

{code}

Note the `hdfs:supergroup:drwxr-xr-x` in the error string. {{/foo}} doesn't 
exist, so it of course has no owner, group or mode bits. The information shown 
above is actually the ownership and access rights of {{/}}, which would be 
{{/foo}}'s parent if it existed.

  was:
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke 
the semantics of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, calling {{rename}} or {{rename2}} with a {{src}} path that was 
resolvable (i.e. each ancestor directory was executable to the user and 
therefore could be traversed) but which itself did not exist, the Namenode 
would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path 
resolutions at different phases of a rename operation had the side effect of 
breaking this behavior. In 2.7.0 and above, the Namenode instead raises the 
following:

{{
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: 
Permission denied: user=nobody, access=WRITE, 
inode="/foo":hdfs:supergroup:drwxr-xr-x
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
at 

[jira] [Updated] (HDFS-9712) libhdfs++: Reimplement Status object as a normal struct

2016-01-27 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9712:
--
Status: Patch Available  (was: Open)

> libhdfs++: Reimplement Status object as a normal struct
> ---
>
> Key: HDFS-9712
> URL: https://issues.apache.org/jira/browse/HDFS-9712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9712.HDFS-8707.000.patch
>
>
> hdfs::Status is doing all sorts of reinterpret casts on a block of memory 
> referenced by a char *.  Using a char *, casting to a wider type, and 
> dereferencing can cause fun alignment issues.
> As far as I can tell, the data layout in Status can be boiled down to:
> {code}
> class Status {
>   int code;
>   std::string msg;
> }
> {code}
> This avoids doing manual memcopies in the copy ctor and delete[]s in the 
> dtor.  It will also get rid of boilerplate null checks and casts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9715) Check storage ID uniqueness on datanode startup

2016-01-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9715:

Attachment: HDFS-9715.00.patch

Added a patch that checks the StorageUUID in {{FsDatasetImpl}}. It 
raises an {{IOE}} if a storage with the same storage UUID already exists.
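
Conceptually, the check amounts to something like the following (an 
illustrative sketch only, not the actual patch; the real change lives in 
{{FsDatasetImpl}}):

{code}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class StorageUuidUniquenessCheck {
  private final Set<String> storageUuids = new HashSet<>();

  // Called once per storage directory as the datanode adds its volumes.
  public void addStorage(String storageUuid) throws IOException {
    // HashSet#add returns false when the element is already present.
    if (!storageUuids.add(storageUuid)) {
      throw new IOException("Duplicate storage UUID " + storageUuid
          + ": a storage with the same ID is already registered");
    }
  }
}
{code}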

> Check storage ID uniqueness on datanode startup
> ---
>
> Key: HDFS-9715
> URL: https://issues.apache.org/jira/browse/HDFS-9715
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9715.00.patch
>
>
> We should fix this to check storage ID uniqueness on datanode startup. If 
> someone has manually edited the storage ID files, or if they have duplicated 
> a directory (or re-added an old disk) they could end up with a duplicate 
> storage ID and not realize it. 
> The HDFS-7575 fix does generate a storage UUID for each storage, but it does 
> not check the uniqueness of these UUIDs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9715) Check storage ID uniqueness on datanode startup

2016-01-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9715:

Status: Patch Available  (was: Open)

> Check storage ID uniqueness on datanode startup
> ---
>
> Key: HDFS-9715
> URL: https://issues.apache.org/jira/browse/HDFS-9715
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9715.00.patch
>
>
> We should fix this to check storage ID uniqueness on datanode startup. If 
> someone has manually edited the storage ID files, or if they have duplicated 
> a directory (or re-added an old disk) they could end up with a duplicate 
> storage ID and not realize it. 
> The HDFS-7575 fix does generate a storage UUID for each storage, but it does 
> not check the uniqueness of these UUIDs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120533#comment-15120533
 ] 

Hudson commented on HDFS-9677:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9196 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9196/])
Revert "HDFS-9677. Rename generationStampV1/generationStampV2 to (jing9: rev 
3a9571308e99cc374681bbc451a517d41a150aa0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSequentialBlockId.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfLegacyGenerationStampsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfV1GenerationStampsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
HDFS-9677. Rename generationStampV1/generationStampV2 to (jing9: rev 
ec25c7f9c7e60c077d8c4143253c20445fcdaecf)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfLegacyGenerationStampsException.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/OutOfV1GenerationStampsException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestSequentialBlockId.java


> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> 

[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120338#comment-15120338
 ] 

Jing Zhao commented on HDFS-9494:
-

Thanks [~demongaorui]. The 07 patch looks good to me. Some minor comments:
# In {{DFSStripedOutputStream}}, we do not need to move all the data fields to 
the beginning of the class.
# {{flushAllExecutor.shutdownNow()}} can be called in the finally section.
# flushAllFuturesMap does not need to be a class-level field. It can be a 
temporary variable defined and used in {{flushAllInternals}}.


> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.
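
As a rough sketch of the proposed restructuring (illustrative only — the 
{{flushOne}} helper and the per-call pool are hypothetical, and this is not one 
of the attached patches), the idea is to trigger every streamer first and only 
then wait for all of them before flushAllInternals( ) returns:

{code}
// Sketch: fan out the per-streamer flush, then join on all futures.
ExecutorService pool = Executors.newFixedThreadPool(streamers.size());
try {
  List<Future<Void>> futures = new ArrayList<>();
  for (final StripedDataStreamer s : streamers) {
    futures.add(pool.submit(new Callable<Void>() {
      @Override
      public Void call() throws Exception {
        flushOne(s);  // hypothetical: flushInternal() then waitForAckedSeqno()
        return null;
      }
    }));
  }
  for (Future<Void> f : futures) {
    f.get();          // wait for every streamer's acks before returning
  }
} finally {
  pool.shutdownNow();
}
{code}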



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9715) Check storage ID uniqueness on datanode startup

2016-01-27 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-9715:
---

 Summary: Check storage ID uniqueness on datanode startup
 Key: HDFS-9715
 URL: https://issues.apache.org/jira/browse/HDFS-9715
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.2
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


We should fix this to check storage ID uniqueness on datanode startup. If 
someone has manually edited the storage ID files, or if they have duplicated a 
directory (or re-added an old disk) they could end up with a duplicate storage 
ID and not realize it. 

The HDFS-7575 fix does generate a storage UUID for each storage, but it does 
not check the uniqueness of these UUIDs.
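
As a minimal sketch of the kind of check this proposes (illustrative only — the 
class and method names below are hypothetical, not the attached 
HDFS-9715.00.patch), the datanode could collect the storage IDs reported by its 
storage directories and fail fast on a duplicate:

{code}
// Illustrative sketch: fail fast when two storage directories report the
// same storage ID (e.g. a duplicated directory or a re-added old disk).
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StorageIdUniquenessCheck {
  static void checkUnique(List<String> storageIds) {
    Set<String> seen = new HashSet<>();
    for (String id : storageIds) {
      if (!seen.add(id)) {
        throw new IllegalStateException("Duplicate storage ID: " + id);
      }
    }
  }

  public static void main(String[] args) {
    // The duplicated "DS-aaa" below would trip the check at startup time.
    checkUnique(Arrays.asList("DS-aaa", "DS-bbb", "DS-aaa"));
  }
}
{code}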



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6183) TestWebHDFS may fail with "Namenode is in startup mode"

2016-01-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120569#comment-15120569
 ] 

Mingliang Liu commented on HDFS-6183:
-

Still happens intermittently in recent builds, e.g. 
https://builds.apache.org/job/PreCommit-HDFS-Build/14269/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt

> TestWebHDFS may fail with "Namenode is in startup mode"
> ---
>
> Key: HDFS-6183
> URL: https://issues.apache.org/jira/browse/HDFS-6183
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>
> {noformat}
> java.lang.AssertionError: There are 1 exception(s):
>   Exception 0: java.io.IOException: Namenode is in startup mode
>   ... 
> at 
> org.apache.hadoop.hdfs.TestDFSClientRetries$6.run(TestDFSClientRetries.java:972)
>   ... 
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.assertEmpty(TestDFSClientRetries.java:1083)
>   at 
> org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:1003)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHDFS.testNamenodeRestart(TestWebHDFS.java:217)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9677:

Attachment: HDFS-9677-branch-2.000.patch

Thanks for your review [~jingzhao]. Attached the patch for branch-2.

> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9677-branch-2.000.patch, HDFS-9677.000.patch, 
> HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9716) o.a.h.hdfs.TestRecoverStripedFile fails intermittently in trunk

2016-01-27 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9716:
---

 Summary: o.a.h.hdfs.TestRecoverStripedFile fails intermittently in 
trunk
 Key: HDFS-9716
 URL: https://issues.apache.org/jira/browse/HDFS-9716
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Mingliang Liu


See recent builds:
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/14269/testReport/org.apache.hadoop.hdfs/TestRecoverStripedFile/testRecoverThreeDataBlocks1/
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8477/testReport/org.apache.hadoop.hdfs/TestRecoverStripedFile/testRecoverThreeDataBlocks/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9708) FSNamesystem.initAuditLoggers() doesn't trim classnames

2016-01-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120586#comment-15120586
 ] 

Mingliang Liu commented on HDFS-9708:
-

{{hadoop.hdfs.TestRecoverStripedFile}} is flaky and tracked by [HDFS-9716], 
{{hadoop.hdfs.web.TestWebHDFS}} by [HDFS-6183], and 
{{org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs.testGetBlockLocations}} 
cannot be reproduced locally and seems unrelated.

> FSNamesystem.initAuditLoggers() doesn't trim classnames
> ---
>
> Key: HDFS-9708
> URL: https://issues.apache.org/jira/browse/HDFS-9708
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-9708.000.patch, HDFS-9708.001.patch
>
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> The {{FSNamesystem.initAuditLoggers()}} method reads a list of audit loggers 
> from a call to {{conf.getStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY)}}.
> What it doesn't do is trim each entry, so if there's a space or newline in the
> list, the classname is invalid and won't load, so HDFS won't come out to play.
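
A minimal sketch of the fix being described (illustrative only — the helper 
below is hypothetical and not necessarily how the attached patches do it): trim 
each configured entry before treating it as a classname.

{code}
// Illustrative sketch: trim entries read from dfs.namenode.audit.loggers
// so that " org.example.MyAuditLogger\n" still resolves to a loadable class.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class AuditLoggerNames {
  static List<String> trimmedClassNames(Collection<String> configured) {
    List<String> names = new ArrayList<>();
    for (String name : configured) {
      String trimmed = name.trim();
      if (!trimmed.isEmpty()) {
        names.add(trimmed);
      }
    }
    return names;
  }
}
{code}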



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8999:
--
Attachment: h8999_20160121c_branch-2.patch

Thanks Jing for reviewing the branch-2 patch!

h8999_20160121c_branch-2.patch: reverts some import changes.



> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch, h8999_20160106.patch, 
> h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch, 
> h8999_20160113.patch, h8999_20160114.patch, h8999_20160121.patch, 
> h8999_20160121b.patch, h8999_20160121c.patch, h8999_20160121c_branch-2.patch, 
> h8999_20160121c_branch-2.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120843#comment-15120843
 ] 

Xiaobing Zhou commented on HDFS-9701:
-

Thanks [~xiaochen] for the work. Could you explain why step #2 is true?
{noformat}
Reconfigure task locks on the FsDatasetImpl object
{noformat}

It looks like the Reconfigure task only locks on a member lock, 
ReconfigurableBase#reconfigLock.

> DN may deadlock when hot-swapping under load
> 
>
> Key: HDFS-9701
> URL: https://issues.apache.org/jira/browse/HDFS-9701
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9701.01.patch, HDFS-9701.02.patch, 
> HDFS-9701.03.patch, HDFS-9701.04.patch, HDFS-9701.05.patch
>
>
> If the DN is under load (new blocks being written), a hot-swap task by {{hdfs 
> dfsadmin -reconfig}} may cause a deadlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-9494:
--
Status: In Progress  (was: Patch Available)

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120633#comment-15120633
 ] 

Jing Zhao commented on HDFS-9494:
-

I think we can still keep flushAllExecutorCompletionService as a class-level 
field. In this way we can avoid shutting down the thread pool at the end of 
each flush and can reuse threads.

For the data field order, I remember we should place static fields/class 
definitions before non-static class fields. Also, that change is not related to 
this jira.

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2016-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120662#comment-15120662
 ] 

Hudson commented on HDFS-8578:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9198/])
HDFS-9654. Code refactoring for HDFS-8578. (szetszwo: rev 
662e17b46a0f41ade6a304e12925b70b5d09fc2f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> On upgrade, Datanode should process all storage/data dirs in parallel
> -
>
> Key: HDFS-8578
> URL: https://issues.apache.org/jira/browse/HDFS-8578
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Raju Bairishetti
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, 
> HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, 
> HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, 
> HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, 
> HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch, 
> HDFS-8578-15.patch, HDFS-8578-16.patch, HDFS-8578-17.patch, 
> HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch, 
> HDFS-8578-branch-2.7-002.patch, HDFS-8578-branch-2.7-003.patch, 
> h8578_20151210.patch, h8578_20151211.patch, h8578_20151211b.patch, 
> h8578_20151212.patch, h8578_20151213.patch, h8578_20160117.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs 
> sequentially. Assume it takes ~20 mins to process a single storage dir; then a 
> datanode which has ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>   assert getCTime() == nsInfo.getCTime() 
>   : "Data-node and name-node CTimes must be the same.";
> }
> {code}
> It would save lots of time during major upgrades if the datanode processed all 
> storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?
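
For illustration (a minimal sketch built around the {{doTransition}} loop 
quoted above; the thread pool, its size, and the error handling are 
assumptions, not the attached patches), the sequential loop could instead 
submit one task per storage dir and wait for all of them:

{code}
// Sketch only: process each storage dir in its own task instead of in sequence.
ExecutorService pool =
    Executors.newFixedThreadPool(Math.min(getNumStorageDirs(), 8));
try {
  List<Future<Void>> futures = new ArrayList<>();
  for (int idx = 0; idx < getNumStorageDirs(); idx++) {
    final StorageDirectory sd = getStorageDir(idx);
    futures.add(pool.submit(new Callable<Void>() {
      @Override
      public Void call() throws Exception {
        doTransition(datanode, sd, nsInfo, startOpt);
        return null;
      }
    }));
  }
  for (Future<Void> f : futures) {
    f.get();  // propagate any per-dir failure
  }
} catch (InterruptedException | ExecutionException e) {
  throw new IOException("Storage dir processing failed", e);
} finally {
  pool.shutdownNow();
}
assert getCTime() == nsInfo.getCTime()
    : "Data-node and name-node CTimes must be the same.";
{code}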



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9654) Code refactoring for HDFS-8578

2016-01-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120664#comment-15120664
 ] 

Hudson commented on HDFS-9654:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9198/])
HDFS-9654. Code refactoring for HDFS-8578. (szetszwo: rev 
662e17b46a0f41ade6a304e12925b70b5d09fc2f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java


> Code refactoring for HDFS-8578
> --
>
> Key: HDFS-9654
> URL: https://issues.apache.org/jira/browse/HDFS-9654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9654_20160116.patch
>
>
> This is a code refactoring JIRA in order to change Datanode to process all 
> storage/data dirs in parallel; see also HDFS-8578.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9717) NameNode can not update the status of bad block

2016-01-27 Thread tangshangwen (JIRA)
tangshangwen created HDFS-9717:
--

 Summary: NameNode can not update the status of bad block
 Key: HDFS-9717
 URL: https://issues.apache.org/jira/browse/HDFS-9717
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: tangshangwen
Assignee: tangshangwen


In our cluster, some users set the number of replicas of a file to 1 and then 
back to 2. The file cannot be read, but the NameNode thinks it is healthy:
{noformat}
/user/username/dt=2015-11-30/dp=16/part-r-00063.lzo 1513716944 bytes, 12 
block(s):  Under replicated BP-1422437282658:blk_1897961957_824575827. Target 
Replicas is 2 but found 1 replica(s).
 Replica placement policy is violated for 
BP-1422437282658:blk_1897961957_824575827. Block should be additionally 
replicated on 1 more rack
(s).
0. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961824_824575694 
len=134217728 repl=2 [host1:50010, host2:50010]
1. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827 
len=134217728 repl=1 [host3:50010]
2. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897962047_824575917 
len=134217728 repl=2 [host4:50010, host1:50010]
..

Status: HEALTHY
 Total size:   1513716944 B
 Total dirs:   0
 Total files:  1
 Total symlinks:   0
 Total blocks (validated): 12 (avg. block size 126143078 B)
 Minimally replicated blocks:  12 (100.0 %)
 Over-replicated blocks:   0 (0.0 %)
 Under-replicated blocks:  1 (8.33 %)
 Mis-replicated blocks:1 (8.33 %)
 Default replication factor:   3
 Average block replication:1.916
 Corrupt blocks:   0
 Missing replicas: 1 (4.165 %)
 Number of data-nodes: 
 Number of racks:  xxx
FSCK ended at Thu Jan 28 10:27:49 CST 2016 in 0 milliseconds
{noformat}

But the replica on the datanode has been damaged and can't be read. This is the 
datanode log:
{noformat}
2016-01-23 06:34:42,737 WARN org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: First Verification failed for BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827
java.io.IOException: Input/output error
    at java.io.FileInputStream.readBytes(Native Method)
    at java.io.FileInputStream.read(FileInputStream.java:272)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:529)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:710)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:427)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:506)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:667)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:633)
    at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:101)
    at java.lang.Thread.run(Thread.java:745)
--
2016-01-28 10:28:37,874 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(host1, storageID=DS-1450783279-xxx.xxx.xxx.xxx-50010-1432889625435, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-3f36397d-b160-4414-b7e4-f37b72e96d53;nsid=1992344832;c=0):Failed to transfer BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827 to xxx.xxx.xxx.xxx:50010 got
java.net.SocketException: Original Exception : java.io.IOException: Input/output error
    at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
    at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:433)
at 

[jira] [Updated] (HDFS-9654) Code refactoring for HDFS-8578

2016-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9654:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

I have committed this.

> Code refactoring for HDFS-8578
> --
>
> Key: HDFS-9654
> URL: https://issues.apache.org/jira/browse/HDFS-9654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: h9654_20160116.patch
>
>
> This is a code refactoring JIRA in order to change Datanode to process all 
> storage/data dirs in parallel; see also HDFS-8578.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2016-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8578:
--
Attachment: h8578_20160128.patch

HDFS-9654 is now committed (thanks [~vinayrpet] and [~ctrezzo] for reviewing 
it). The refactoring idea is great. Here is a much smaller patch:

h8578_20160128.patch

> On upgrade, Datanode should process all storage/data dirs in parallel
> -
>
> Key: HDFS-8578
> URL: https://issues.apache.org/jira/browse/HDFS-8578
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Raju Bairishetti
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, 
> HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, 
> HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, 
> HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, 
> HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch, 
> HDFS-8578-15.patch, HDFS-8578-16.patch, HDFS-8578-17.patch, 
> HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch, 
> HDFS-8578-branch-2.7-002.patch, HDFS-8578-branch-2.7-003.patch, 
> h8578_20151210.patch, h8578_20151211.patch, h8578_20151211b.patch, 
> h8578_20151212.patch, h8578_20151213.patch, h8578_20160117.patch, 
> h8578_20160128.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs 
> sequentially. Assume it takes ~20 mins to process a single storage dir; then a 
> datanode which has ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>   assert getCTime() == nsInfo.getCTime() 
>   : "Data-node and name-node CTimes must be the same.";
> }
> {code}
> It would save lots of time during major upgrades if the datanode processed all 
> storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120792#comment-15120792
 ] 

Hadoop QA commented on HDFS-8578:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
458 unchanged - 0 fixed = 461 total (was 458) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 49m 39s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.namenode.TestFSEditLogLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784813/h8578_20160128.patch |
| JIRA Issue | HDFS-8578 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b324c40fa89c 3.13.0-36-lowlatency #63-Ubuntu 

[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120834#comment-15120834
 ] 

GAO Rui commented on HDFS-9494:
---

Oh, I see. I agree with you. I have uploaded a new 08 patch to address the 
previously mentioned issues. Thank you again for your detailed comments!

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch, HDFS-9494-origin-trunk.08.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It could be better to trigger all the streamers to flushInternal( ) and
> wait for all of them to return from waitForAckedSeqno( ),  and then 
> flushAllInternals( ) returns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120837#comment-15120837
 ] 

Rakesh R commented on HDFS-7955:


Oops, my previous patch was wrong. I've attached another one. Due to the 
renaming of the following classes, the patch looks quite big:
- BlockECRecoveryCommand.java to BlockECReconstructionCommand.java
- TestRecoverStripedFile.java to TestReconstructStripedFile.java
- TestRecoverStripedBlocks.java to TestReconstructStripedBlocks.java

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch, 
> HDFS-7955-003.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-27 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8999:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this. Thanks to everyone who has contributed to this.

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: h8999_20151228.patch, h8999_20160106.patch, 
> h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch, 
> h8999_20160113.patch, h8999_20160114.patch, h8999_20160121.patch, 
> h8999_20160121b.patch, h8999_20160121c.patch, h8999_20160121c_branch-2.patch, 
> h8999_20160121c_branch-2.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client received the ack for the last packet of the block (and 
> before the client tries to close the file on NameNode), the replica has been 
> finalized in all the DataNodes.
> Then in this case, when NameNode receives the close request from the client, 
> the NameNode already knows the latest replicas for the block. Currently the 
> checkReplication call only counts in all the replicas that NN has already 
> received the block_received msg, but based on the above #2 and #3, it may be 
> safe to also count in all the replicas in the 
> BlockUnderConstructionFeature#replicas?
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8430) Erasure coding: compute file checksum for stripe files

2016-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120749#comment-15120749
 ] 

Kai Zheng commented on HDFS-8430:
-

A patch is attached on HDFS-9694. I am working on the recovery case; as the 
block group checksum is computed on the datanode side, I am refactoring 
ErasureCodingWorker to reuse the existing code, as commented previously.

> Erasure coding: compute file checksum for stripe files
> --
>
> Key: HDFS-8430
> URL: https://issues.apache.org/jira/browse/HDFS-8430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Walter Su
>Assignee: Kai Zheng
> Attachments: HDFS-8430-poc1.patch
>
>
> HADOOP-3981 introduces a distributed file checksum algorithm. It's designed 
> for replicated blocks.
> {{DFSClient.getFileChecksum()}} needs some updates, so it can work for striped 
> block groups.
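
Purely as a conceptual illustration (this is not HDFS code and not necessarily 
how the eventual patch works), getting a single checksum for a striped block 
group amounts to combining the per-block checksums in block-layout order, 
analogous to the MD5-of-block-checksums idea from HADOOP-3981:

{code}
// Conceptual sketch only: fold per-block checksum bytes of one block group
// into a single digest; ordering must follow the striped block layout.
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class BlockGroupChecksumSketch {
  static byte[] combine(List<byte[]> perBlockChecksums)
      throws NoSuchAlgorithmException {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    for (byte[] blockChecksum : perBlockChecksums) {
      md5.update(blockChecksum);
    }
    return md5.digest();
  }
}
{code}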



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120859#comment-15120859
 ] 

Xiao Chen commented on HDFS-9701:
-

Thanks [~xiaobingo] for looking at this. We actually have multiple locks along 
the reconfig stack. :)
In step 2, the locking I was referring to is inside FsDatasetImpl. Specifically 
[this 
line|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L486].
 Please see my first comment above for the stack trace.
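
To illustrate the general failure mode being discussed (a generic two-lock 
example only, not the actual DN classes or the reconfig code path): if one 
thread takes the reconfig lock and then needs the dataset lock, while another 
thread holds the dataset lock and then needs the reconfig lock, both block 
forever.

{code}
// Generic lock-ordering deadlock illustration; not the real DN code.
public class LockOrderDeadlock {
  private static final Object RECONFIG_LOCK = new Object();
  private static final Object DATASET_LOCK = new Object();

  public static void main(String[] args) {
    Thread reconfig = new Thread(() -> {
      synchronized (RECONFIG_LOCK) {
        pause();
        synchronized (DATASET_LOCK) { }  // waits forever for the writer
      }
    });
    Thread writer = new Thread(() -> {
      synchronized (DATASET_LOCK) {
        pause();
        synchronized (RECONFIG_LOCK) { } // waits forever for the reconfig task
      }
    });
    reconfig.start();
    writer.start();  // the two threads deadlock with near certainty
  }

  private static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
  }
}
{code}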

> DN may deadlock when hot-swapping under load
> 
>
> Key: HDFS-9701
> URL: https://issues.apache.org/jira/browse/HDFS-9701
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9701.01.patch, HDFS-9701.02.patch, 
> HDFS-9701.03.patch, HDFS-9701.04.patch, HDFS-9701.05.patch
>
>
> If the DN is under load (new blocks being written), a hot-swap task by {{hdfs 
> dfsadmin -reconfig}} may cause a deadlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120875#comment-15120875
 ] 

Kai Zheng commented on HDFS-7955:
-

Thanks for the discussions. I have two questions.
* Considering that in most cases we have used words like {{striped}}, 
{{striping}}, {{ErasureCoding}}, {{EC}}, etc. in names to specify the context or 
avoid confusion, do we still need to do the rename?
* OK, if we still do the rename, do we need to keep words like {{striped}}? If 
we do, is there any case other than striping that uses a word like 
{{reconstruction}}?

Thanks.

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch, 
> HDFS-7955-003.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9260) Improve performance and GC friendliness of startup and FBRs

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120593#comment-15120593
 ] 

Hadoop QA commented on HDFS-9260:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 6 new + 
705 unchanged - 11 fixed = 711 total (was 716) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 48s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 220m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | 

[jira] [Commented] (HDFS-9715) Check storage ID uniqueness on datanode startup

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120606#comment-15120606
 ] 

Hadoop QA commented on HDFS-9715:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
119 unchanged - 0 fixed = 120 total (was 119) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 132m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSFinalize |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestDFSFinalize |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784771/HDFS-9715.00.patch |
| JIRA Issue | HDFS-9715 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  

[jira] [Updated] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-01-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-9694:

Attachment: HDFS-9694-v1.patch

Uploaded a patch.
[~szetszwo], could you help take a quick look at it? Since the patch refactors 
the existing {{DFSClient#getFileChecksum}} and {{DataXceiver#blockChecksum}} so 
that I can reuse the code for the striping case, do you think we should do the 
refactoring separately? Thanks for your hint.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch
>
>
> This is a sub-task of HDFS-8430 and will get the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing codes and layout basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120863#comment-15120863
 ] 

Hadoop QA commented on HDFS-9494:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client: patch 
generated 0 new + 30 unchanged - 1 fixed = 30 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784831/HDFS-9494-origin-trunk.08.patch
 |
| JIRA Issue | HDFS-9494 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b591ef4c8d1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-9677) Rename generationStampV1/generationStampV2 to legacyGenerationStamp/generationStamp

2016-01-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120373#comment-15120373
 ] 

Jing Zhao commented on HDFS-9677:
-

Thanks for the work, Mingliang, and thanks for the review, [~vinayrpet]! The 
patch also looks good to me. +1. Will commit the patch shortly.

[~liuml07], could you also post a patch for branch-2?

> Rename generationStampV1/generationStampV2 to 
> legacyGenerationStamp/generationStamp
> ---
>
> Key: HDFS-9677
> URL: https://issues.apache.org/jira/browse/HDFS-9677
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9677.000.patch, HDFS-9677.001.patch
>
>
> [comment|https://issues.apache.org/jira/browse/HDFS-9542?focusedCommentId=15110531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15110531]
>  from [~drankye] in HDFS-9542:
> {quote}
> Just wonder if it's a good idea to rename: generationStampV1 => 
> legacyGenerationStamp; generationStampV2 => generationStamp, similar for 
> other variables, as we have legacy block and block.
> {quote}
> This jira plans to do this rename.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.

2016-01-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120384#comment-15120384
 ] 

Jing Zhao commented on HDFS-8999:
-

Thanks for posting the branch-2 patch, Nicholas! It looks good to me. +1.

Some changes to imports can be removed; you can do that while committing the 
patch.

> Namenode need not wait for {{blockReceived}} for the last block before 
> completing a file.
> -
>
> Key: HDFS-8999
> URL: https://issues.apache.org/jira/browse/HDFS-8999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jitendra Nath Pandey
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8999_20151228.patch, h8999_20160106.patch, 
> h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch, 
> h8999_20160113.patch, h8999_20160114.patch, h8999_20160121.patch, 
> h8999_20160121b.patch, h8999_20160121c.patch, h8999_20160121c_branch-2.patch
>
>
> This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment 
> from the jira:
> {quote}
> ...whether we need to let NameNode wait for all the block_received msgs to 
> announce the replica is safe. Looking into the code, now we have
># NameNode knows the DataNodes involved when initially setting up the 
> writing pipeline
># If any DataNode fails during the writing, client bumps the GS and 
> finally reports all the DataNodes included in the new pipeline to NameNode 
> through the updatePipeline RPC.
># When the client has received the ack for the last packet of the block (and 
> before the client tries to close the file on the NameNode), the replica has 
> been finalized on all the DataNodes.
> Then in this case, when the NameNode receives the close request from the 
> client, it already knows the latest replicas for the block. Currently the 
> checkReplication call counts only the replicas for which the NN has already 
> received the block_received msg, but based on #2 and #3 above, it may be safe 
> to also count the replicas in BlockUnderConstructionFeature#replicas?
> {quote}
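For illustration only, a minimal sketch of the counting idea from the quoted discussion; the accessor names below are hypothetical and not the actual NameNode internals:

{code}
// Hypothetical sketch, not the real FSNamesystem/BlockManager code: when the
// client asks to complete the file, count both the replicas already confirmed
// via blockReceived and the replicas recorded for the under-construction
// block's write pipeline (kept current by updatePipeline).
int countReplicasForClose(BlockInfo lastBlock) {
  int reported = lastBlock.numNodes();  // replicas confirmed by blockReceived
  BlockUnderConstructionFeature uc = lastBlock.getUnderConstructionFeature();
  int pipeline = (uc != null) ? uc.getNumExpectedLocations() : 0;
  return Math.max(reported, pipeline);
}
{code}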



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9708) FSNamesystem.initAuditLoggers() doesn't trim classnames

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120496#comment-15120496
 ] 

Hadoop QA commented on HDFS-9708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 186 unchanged - 1 fixed = 186 total (was 187) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 33s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784738/HDFS-9708.001.patch |
| JIRA Issue | HDFS-9708 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 341944709c1c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Commented] (HDFS-9712) libhdfs++: Reimplement Status object as a normal struct

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120511#comment-15120511
 ] 

Hadoop QA commented on HDFS-9712:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 59s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 57s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784763/HDFS-9712.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-9712 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux d69323963050 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 6df167c |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_72 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14271/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 75MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14271/console |


This message was automatically generated.



> libhdfs++: Reimplement Status object as a normal struct
> ---
>
> Key: HDFS-9712
> URL: https://issues.apache.org/jira/browse/HDFS-9712
> 

[jira] [Commented] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120575#comment-15120575
 ] 

Xiao Chen commented on HDFS-9701:
-

The failed tests seem unrelated and are known to be flaky (e.g. HDFS-9466).

> DN may deadlock when hot-swapping under load
> 
>
> Key: HDFS-9701
> URL: https://issues.apache.org/jira/browse/HDFS-9701
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9701.01.patch, HDFS-9701.02.patch, 
> HDFS-9701.03.patch, HDFS-9701.04.patch, HDFS-9701.05.patch
>
>
> If the DN is under load (new blocks being written), a hot-swap task by {{hdfs 
> dfsadmin -reconfig}} may cause a deadlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-9494:
--
Attachment: HDFS-9494-origin-trunk.07.patch

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It would be better to trigger flushInternal( ) on all the streamers, wait for 
> all of them to return from waitForAckedSeqno( ), and only then have 
> flushAllInternals( ) return.
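As a rough sketch of the proposed parallel flow (not the attached patch; the streamer list, executor, and method signatures here are illustrative):

{code}
// Illustrative only: trigger flushInternal() on all streamers concurrently,
// then wait for every streamer's acks before flushAllInternals() returns.
void flushAllInternalsInParallel(List<StripedDataStreamer> streamers,
    ExecutorService flushPool) throws IOException {
  List<Future<Void>> pending = new ArrayList<>();
  for (StripedDataStreamer s : streamers) {
    pending.add(flushPool.submit(() -> {
      s.flushInternal();        // each streamer flushes its own data
      s.waitForAckedSeqno();    // and waits for its own acks
      return null;
    }));
  }
  for (Future<Void> f : pending) {
    try {
      f.get();                  // surface the first failure, if any
    } catch (Exception e) {
      throw new IOException("flushAllInternals failed", e);
    }
  }
}
{code}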



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9494) Parallel optimization of DFSStripedOutputStream#flushAllInternals( )

2016-01-27 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-9494:
--
Status: In Progress  (was: Patch Available)

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> 
>
> Key: HDFS-9494
> URL: https://issues.apache.org/jira/browse/HDFS-9494
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
>Assignee: GAO Rui
>Priority: Minor
> Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch, HDFS-9494-origin-trunk.02.patch, 
> HDFS-9494-origin-trunk.03.patch, HDFS-9494-origin-trunk.04.patch, 
> HDFS-9494-origin-trunk.05.patch, HDFS-9494-origin-trunk.06.patch, 
> HDFS-9494-origin-trunk.07.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) in sequence. So the runtime flow is like:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It would be better to trigger flushInternal( ) on all the streamers, wait for 
> all of them to return from waitForAckedSeqno( ), and only then have 
> flushAllInternals( ) return.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support "unbuffer"

2016-01-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119738#comment-15119738
 ] 

Junping Du commented on HDFS-7694:
--

I have cherry-pick it to branch-2.6.

> FSDataInputStream should support "unbuffer"
> ---
>
> Key: HDFS-7694
> URL: https://issues.apache.org/jira/browse/HDFS-7694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0, 2.6.4
>
> Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
> HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch
>
>
> For applications that have many open HDFS (or other Hadoop filesystem) files, 
> it would be useful to have an API to clear readahead buffers and sockets.  
> This could be added to the existing APIs as an optional interface, in much 
> the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7694) FSDataInputStream should support "unbuffer"

2016-01-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7694:
-
Fix Version/s: 2.6.4

> FSDataInputStream should support "unbuffer"
> ---
>
> Key: HDFS-7694
> URL: https://issues.apache.org/jira/browse/HDFS-7694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.7.0, 2.6.4
>
> Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch, 
> HDFS-7694.003.patch, HDFS-7694.004.patch, HDFS-7694.005.patch
>
>
> For applications that have many open HDFS (or other Hadoop filesystem) files, 
> it would be useful to have an API to clear readahead buffers and sockets.  
> This could be added to the existing APIs as an optional interface, in much 
> the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9701:

Attachment: HDFS-9701.05.patch

> DN may deadlock when hot-swapping under load
> 
>
> Key: HDFS-9701
> URL: https://issues.apache.org/jira/browse/HDFS-9701
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9701.01.patch, HDFS-9701.02.patch, 
> HDFS-9701.03.patch, HDFS-9701.04.patch, HDFS-9701.05.patch
>
>
> If the DN is under load (new blocks being written), a hot-swap task by {{hdfs 
> dfsadmin -reconfig}} may cause a deadlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119761#comment-15119761
 ] 

Xiao Chen commented on HDFS-9701:
-

Ah, good catch, [~vinayrpet]! And thanks for reviewing. I added the second 
parameter to the method solely for that reason, yet I didn't pass in the 
{{checkDirsMutex}}. Patch 5 fixes this.

> DN may deadlock when hot-swapping under load
> 
>
> Key: HDFS-9701
> URL: https://issues.apache.org/jira/browse/HDFS-9701
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9701.01.patch, HDFS-9701.02.patch, 
> HDFS-9701.03.patch, HDFS-9701.04.patch, HDFS-9701.05.patch
>
>
> If the DN is under load (new blocks being written), a hot-swap task by {{hdfs 
> dfsadmin -reconfig}} may cause a deadlock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119711#comment-15119711
 ] 

Hadoop QA commented on HDFS-9705:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 52 unchanged 
- 1 fixed = 54 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 35s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 108m 54s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 252m 45s {color} 
| {color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119714#comment-15119714
 ] 

Yongjun Zhang commented on HDFS-9706:
-

Hi [~xiaochen],

Thanks for working on this. A couple of suggestions:

* Instead of printing {{isTransfer}}, it may be better to just print 
{{stage}}'s value.
* We are printing almost all parameters except 3 or 4. I'd suggest we take this 
opportunity to print all of them, ordered the same way the parameters are 
passed to the method (one day we may need to know the value of those other 
parameters when investigating a new issue).

Thanks.
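A rough sketch of what such a consolidated, ordered debug statement could look like; the field names below are illustrative and the actual constructor parameters may differ:

{code}
// Illustrative only: one guarded debug line that prints the parameters in the
// order they are passed to the constructor, using stage instead of isTransfer.
if (LOG.isDebugEnabled()) {
  LOG.debug("BlockReceiver: " + block
      + ", storageType=" + storageType
      + ", inAddr=" + inAddr + ", myAddr=" + myAddr
      + ", stage=" + stage + ", newGs=" + newGs
      + ", minBytesRcvd=" + minBytesRcvd + ", maxBytesRcvd=" + maxBytesRcvd
      + ", clientname=" + clientname + ", srcDataNode=" + srcDataNode
      + ", requestedChecksum=" + requestedChecksum
      + ", cachingStrategy=" + cachingStrategy);
}
{code}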


> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which path the code takes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9713) DataXceiver#copyBlock should return if block is pinned

2016-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120939#comment-15120939
 ] 

Kai Zheng commented on HDFS-9713:
-

Good catch, Uma. I think you're right.

> DataXceiver#copyBlock should return if block is pinned
> --
>
> Key: HDFS-9713
> URL: https://issues.apache.org/jira/browse/HDFS-9713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>
> in DataXceiver#copyBlock
> {code}
>   if (datanode.data.getPinning(block)) {
>   String msg = "Not able to copy block " + block.getBlockId() + " " +
>   "to " + peer.getRemoteAddressString() + " because it's pinned ";
>   LOG.info(msg);
>   sendResponse(ERROR, msg);
> }
> {code}
> I think we should return instead of proceeding to send the block, as we 
> already sent ERROR here.
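A minimal sketch of the suggested change, reusing the quoted snippet (not the actual committed patch):

{code}
// Sketch of the suggestion: after replying with ERROR for a pinned block,
// return immediately instead of falling through and sending the block anyway.
if (datanode.data.getPinning(block)) {
  String msg = "Not able to copy block " + block.getBlockId() + " " +
      "to " + peer.getRemoteAddressString() + " because it's pinned ";
  LOG.info(msg);
  sendResponse(ERROR, msg);
  return;  // proposed addition: stop processing this copy request
}
{code}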



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9719) Refactoring ErasureCodingWorker into smaller reusable constructs

2016-01-27 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-9719:
---

 Summary: Refactoring ErasureCodingWorker into smaller reusable 
constructs
 Key: HDFS-9719
 URL: https://issues.apache.org/jira/browse/HDFS-9719
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This proposes refactoring {{ErasureCodingWorker}} into smaller constructs that 
can be reused in other places, such as block group checksum computation on the 
datanode side. As discussed in HDFS-8430 and implemented in the HDFS-9694 
patch, checksum computation for striped block groups would be distributed to 
the datanodes in the group, where missing or corrupted data blocks must be 
reconstructable so the block checksum can be recomputed. Most of the needed 
code is in the current ErasureCodingWorker and could be reused to avoid 
duplication. Fortunately, we have very good and complete tests, which will make 
the refactoring much easier. The refactoring will also help a lot with 
subsequent phase II tasks for non-striped erasure coded files and blocks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9406) FSImage corruption after taking snapshot

2016-01-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9406:

Status: Patch Available  (was: Open)

> FSImage corruption after taking snapshot
> 
>
> Key: HDFS-9406
> URL: https://issues.apache.org/jira/browse/HDFS-9406
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: CentOS 6 amd64, CDH 5.4.4-1
> 2xCPU: Intel(R) Xeon(R) CPU E5-2640 v3
> Memory: 32GB
> Namenode blocks: ~700_000 blocks, no HA setup
>Reporter: Stanislav Antic
>Assignee: Yongjun Zhang
> Attachments: HDFS-9406.001.patch
>
>
> FSImage corruption happened after HDFS snapshots were taken. The cluster was 
> not in use at that time.
> When the namenode restarted, it reported a NullPointerException:
> {code}
> 15/11/07 10:01:15 INFO namenode.FileJournalManager: Recovering unfinalized 
> segments in /tmp/fsimage_checker_5857/fsimage/current
> 15/11/07 10:01:15 INFO namenode.FSImage: No edit log streams selected.
> 15/11/07 10:01:18 INFO namenode.FSImageFormatPBINode: Loading 1370277 INodes.
> 15/11/07 10:01:27 ERROR namenode.NameNode: Failed to start namenode.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addChild(INodeDirectory.java:531)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:252)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:202)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:261)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:180)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:929)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:913)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:732)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:668)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:643)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:810)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:794)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
> 15/11/07 10:01:27 INFO util.ExitUtil: Exiting with status 1
> {code}
> Corruption happened after "07.11.2015 00:15", and after that time ~9300 
> blocks were invalidated that shouldn't have been.
> After recovering the FSImage I discovered that around 9300 blocks were missing.
> -I also attached the namenode log from before and after the corruption happened.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9406) FSImage corruption after taking snapshot

2016-01-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9406:

Attachment: HDFS-9406.001.patch

> FSImage corruption after taking snapshot
> 
>
> Key: HDFS-9406
> URL: https://issues.apache.org/jira/browse/HDFS-9406
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
> Environment: CentOS 6 amd64, CDH 5.4.4-1
> 2xCPU: Intel(R) Xeon(R) CPU E5-2640 v3
> Memory: 32GB
> Namenode blocks: ~700_000 blocks, no HA setup
>Reporter: Stanislav Antic
>Assignee: Yongjun Zhang
> Attachments: HDFS-9406.001.patch
>
>
> FSImage corruption happened after HDFS snapshots were taken. The cluster was 
> not in use at that time.
> When the namenode restarted, it reported a NullPointerException:
> {code}
> 15/11/07 10:01:15 INFO namenode.FileJournalManager: Recovering unfinalized 
> segments in /tmp/fsimage_checker_5857/fsimage/current
> 15/11/07 10:01:15 INFO namenode.FSImage: No edit log streams selected.
> 15/11/07 10:01:18 INFO namenode.FSImageFormatPBINode: Loading 1370277 INodes.
> 15/11/07 10:01:27 ERROR namenode.NameNode: Failed to start namenode.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addChild(INodeDirectory.java:531)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.addToParent(FSImageFormatPBINode.java:252)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:202)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:261)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:180)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:929)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:913)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:732)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:668)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1061)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:765)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:643)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:810)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:794)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
> 15/11/07 10:01:27 INFO util.ExitUtil: Exiting with status 1
> {code}
> Corruption happened after "07.11.2015 00:15", and after that time ~9300 
> blocks were invalidated that shouldn't have been.
> After recovering the FSImage I discovered that around 9300 blocks were missing.
> -I also attached the namenode log from before and after the corruption happened.-



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120935#comment-15120935
 ] 

Hadoop QA commented on HDFS-7955:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs-project: patch generated 46 new + 586 
unchanged - 23 fixed = 632 total (was 609) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 32 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 4s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK 

[jira] [Updated] (HDFS-9657) Schedule EC tasks at proper time to reduce the impact of recovery traffic

2016-01-27 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-9657:

Attachment: HDFS-9657-001.patch

> Schedule EC tasks at proper time to reduce the impact of recovery traffic
> -
>
> Key: HDFS-9657
> URL: https://issues.apache.org/jira/browse/HDFS-9657
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-9657-001.patch
>
>
> The EC recovery tasks consume a lot of network bandwidth and disk I/O. 
> Recovering a corrupt block requires transferring 6 blocks, creating a 6X 
> overhead in network bandwidth and disk I/O. When a datanode fails, recovering 
> all the blocks on that datanode may use up the network bandwidth. We need to 
> start recovery tasks at a proper time in order to reduce the impact on the 
> system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-01-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120958#comment-15120958
 ] 

Kai Zheng commented on HDFS-9694:
-

I am working on the datanode failure handling case, where the data of 
missing/corrupted data blocks needs to be reconstructed for checksum recomputation.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out the basic work for subsequent tasks, like 
> support for the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9718) NameNode#initializeGenericKeys don't unset the property while it is not assign on special nnId

2016-01-27 Thread DENG FEI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DENG FEI updated HDFS-9718:
---
Attachment: HDFS-9718.001.patch

> NameNode#initializeGenericKeys don't unset the property while it is not 
> assign on special nnId
> --
>
> Key: HDFS-9718
> URL: https://issues.apache.org/jira/browse/HDFS-9718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: DENG FEI
>Assignee: DENG FEI
> Attachments: HDFS-9718.001.patch
>
>
> Scenario:
> We want to enable dfs.namenode.servicerpc-address on the NN to separate client 
> and DN RPC, and to roll it out starting with the standby NN -- i.e. add the 
> property only to the standby NN and the DNs. When EditLogTailer chooses the 
> active NN's address, it only resets the generic keys for the active NN's nnId, 
> but does not unset a generic key when no value is specified for that nnId, so 
> the standby NN chooses itself as the active NN.
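For illustration, a hedged sketch of the proposed behaviour using the public {{Configuration}} API; the helper below is hypothetical and not the actual {{NameNode#initializeGenericKeys}} code:

{code}
// Hypothetical illustration: when mapping a per-nnId key such as
// dfs.namenode.servicerpc-address.<nameservice>.<nnId> onto its generic key,
// unset the generic key if no per-nnId value is configured, so a value meant
// for another NN cannot leak through.
static void copyOrUnsetGenericKey(Configuration conf,
    String genericKey, String nnSpecificKey) {
  String value = conf.get(nnSpecificKey);
  if (value != null) {
    conf.set(genericKey, value);
  } else {
    conf.unset(genericKey);
  }
}
{code}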



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9717) NameNode can not update the status of bad block

2016-01-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-9717:
-
Component/s: namenode

> NameNode can not update the status of bad block
> ---
>
> Key: HDFS-9717
> URL: https://issues.apache.org/jira/browse/HDFS-9717
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> In our cluster, some users set the replication factor of a file to 1 and then 
> back to 2. The file cannot be read, but the NameNode thinks it is healthy:
> {noformat}
> /user/username/dt=2015-11-30/dp=16/part-r-00063.lzo 1513716944 bytes, 12 
> block(s):  Under replicated BP-1422437282658:blk_1897961957_824575827. Target 
> Replicas is 2 but found 1 replica(s).
>  Replica placement policy is violated for 
> BP-1422437282658:blk_1897961957_824575827. Block should be additionally 
> replicated on 1 more rack
> (s).
> 0. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961824_824575694 
> len=134217728 repl=2 [host1:50010, host2:50010]
> 1. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827 
> len=134217728 repl=1 [host3:50010]
> 2. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897962047_824575917 
> len=134217728 repl=2 [host4:50010, host1:50010]
> ..
> Status: HEALTHY
>  Total size:   1513716944 B
>  Total dirs:   0
>  Total files:  1
>  Total symlinks:   0
>  Total blocks (validated): 12 (avg. block size 126143078 B)
>  Minimally replicated blocks:  12 (100.0 %)
>  Over-replicated blocks:   0 (0.0 %)
>  Under-replicated blocks:  1 (8.33 %)
>  Mis-replicated blocks:1 (8.33 %)
>  Default replication factor:   3
>  Average block replication:1.916
>  Corrupt blocks:   0
>  Missing replicas: 1 (4.165 %)
>  Number of data-nodes: 
>  Number of racks:  xxx
> FSCK ended at Thu Jan 28 10:27:49 CST 2016 in 0 milliseconds
> {noformat}
> But the replica on the datanode has been damaged and cannot be read. This is 
> the datanode log:
> {noformat}
> 2016-01-23 06:34:42,737 WARN 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: First 
> Verification failed for 
> BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827  
>   
>
> java.io.IOException: Input/output error   
>   
>
> at java.io.FileInputStream.readBytes(Native Method)   
>   
>
> at java.io.FileInputStream.read(FileInputStream.java:272) 
>   
>
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)   
>   
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:529)
>   
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:710)
>   
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:427)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:506)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:667)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:633)
>  
> at 
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:101)
>   
> at java.lang.Thread.run(Thread.java:745)
> --
> 2016-01-28 10:28:37,874 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DatanodeRegistration(host1, 
> storageID=DS-1450783279-xxx.xxx.xxx.xxx-50010-1432889625435
> , infoPort=50075, ipcPort=50020, 
> 

[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119890#comment-15119890
 ] 

Yongjun Zhang commented on HDFS-9706:
-

Thanks Xiao for the new rev. One further suggestion: we can put multiple 
entries on the same line so we have fewer lines in the log. What about:

{code}
2016-01-27 09:34:12,849 [DataXceiver for client cl at /127.0.0.1:50088 
[Receiving block BP-908057294-192.168.1.79-1453916050495:blk_1073741825_1001]] 
DEBUG datanode.DataNode (BlockReceiver.java:(189)) - BlockReceiver: 
BP-908057294-192.168.1.79-1453916050495:blk_1073741825_1001
 storageType=DISK, inAddr=/127.0.0.1:50088, myAddr=/127.0.0.1:50079, 
 stage=DATA_STREAMING, newGs=0, minBytesRcvd=1, maxBytesRcvd=1,
 clientname=cl, srcDataNode=:0, datanode=127.0.0.1:50079
 requestedChecksum=DataChecksum(type=CRC32C, chunkSize=512), 
cachingStrategy=CachingStrategy(dropBehind=null, readahead=null)
 allowLazyPersist=false, pinning=false, isClient=true, isDatanode=false, 
responseInterval=3
{code}

Thanks.


> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which path the code takes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-27 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: (was: HDFS-9686.patch.0)

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> There are lots of places where useless boxing and unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-27 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.0.patch

Fixed style.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.0.patch
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issue, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9706:

Attachment: HDFS-9706.02.patch

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9684) DataNode stopped sending heartbeat after getting OutOfMemoryError form DataTransfer thread.

2016-01-27 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119871#comment-15119871
 ] 

Surendra Singh Lilhore commented on HDFS-9684:
--

Yes, the DN should have some health check to monitor all of its service threads.

For OutOfMemoryError, a discussion took place in HDFS-2911, and I think the 
conclusion was to kill the DN in case of OOM.
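
Not the HDFS-9684 patch, just a minimal sketch of that "kill the process on OOM" policy, assuming a plain uncaught-exception handler installed at daemon startup (the class and method names are made up for illustration):
{code}
// Sketch only: halt the JVM when any thread dies with an OutOfMemoryError,
// so the daemon fails fast instead of limping along without heartbeats.
public class OomFailFast {
  public static void install() {
    Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
      @Override
      public void uncaughtException(Thread t, Throwable e) {
        if (e instanceof OutOfMemoryError) {
          // Runtime.halt() skips shutdown hooks, which may themselves need memory.
          System.err.println("OOM in thread " + t.getName() + ", halting JVM");
          Runtime.getRuntime().halt(1);
        }
      }
    });
  }
}
{code}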


> DataNode stopped sending heartbeat after getting OutOfMemoryError form 
> DataTransfer thread.
> ---
>
> Key: HDFS-9684
> URL: https://issues.apache.org/jira/browse/HDFS-9684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: HDFS-9684.01.patch
>
>
> {noformat}
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:714)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlock(DataNode.java:1999)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.transferBlocks(DataNode.java:2008)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:657)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:615)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:857)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:823)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119862#comment-15119862
 ] 

Xiao Chen commented on HDFS-9706:
-

Thanks Yongjun for the comments.
I agree with your suggestions. Patch 2 rewrote the debug log to match the 
parameters passed in. On top of that, I additionally printed {{isClient}}, 
{{isDatanode}}, and {{responseInterval}}. These three can be deduced from the 
input or read from the config of the input datanode, but since this is a debug 
log, I think having one more line to save some developer time is a good tradeoff.

Below is a sample output of the log:
{noformat}
2016-01-27 09:34:12,849 [DataXceiver for client cl at /127.0.0.1:50088 
[Receiving block BP-908057294-192.168.1.79-1453916050495:blk_1073741825_1001]] 
DEBUG datanode.DataNode (BlockReceiver.java:(189)) - BlockReceiver: 
BP-908057294-192.168.1.79-1453916050495:blk_1073741825_1001
 storageType=DISK, inAddr=/127.0.0.1:50088
 myAddr=/127.0.0.1:50079, stage=DATA_STREAMING, newGs=0
 minBytesRcvd=1
 maxBytesRcvd=1, clientname=cl
 srcDataNode=:0
 datanode=127.0.0.1:50079
 requestedChecksum=DataChecksum(type=CRC32C, chunkSize=512)
 cachingStrategy=CachingStrategy(dropBehind=null, readahead=null)
 allowLazyPersist=false, pinning=false
 isClient=true, isDatanode=false
 responseInterval=3
{noformat}
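
For illustration only, a minimal sketch of how such a consolidated constructor debug line can be built; this is not the attached patch, the logger and the field names below are assumptions, and only a few of the fields are shown:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BlockReceiverLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger(BlockReceiverLogSketch.class);

  static void logConstructorState(String block, String storageType, String inAddr,
      String myAddr, boolean isClient, boolean isDatanode, long responseInterval) {
    // Guard the string concatenation so it costs nothing when DEBUG is off.
    if (LOG.isDebugEnabled()) {
      LOG.debug("BlockReceiver: " + block
          + "\n  storageType=" + storageType + ", inAddr=" + inAddr + ", myAddr=" + myAddr
          + "\n  isClient=" + isClient + ", isDatanode=" + isDatanode
          + ", responseInterval=" + responseInterval);
    }
  }
}
{code}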

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8960) DFS client says "no more good datanodes being available to try" on a single drive failure

2016-01-27 Thread Ruslan Dautkhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119793#comment-15119793
 ] 

Ruslan Dautkhanov commented on HDFS-8960:
-

We seem to be hitting the same problem on a Hive job too:

{quote}
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.20.32.60:1004,DS-1cc9c7cd-f1f9-4cad-b6e2-c9821d644033,DISK]], original=[DatanodeInfoWithStorage[10.20.32.60:1004,DS-1cc9c7cd-f1f9-4cad-b6e2-c9821d644033,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:265)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.20.32.60:1004,DS-1cc9c7cd-f1f9-4cad-b6e2-c9821d644033,DISK]], original=[DatanodeInfoWithStorage[10.20.32.60:1004,DS-1cc9c7cd-f1f9-4cad-b6e2-c9821d644033,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:729)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1047)
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.flushHashTable(GroupByOperator.java:1015)
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:833)
    at 
{quote}
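
For reference, the knob named in the error text is a client-side configuration key. A hedged illustration of setting it programmatically follows; the values shown are assumptions an operator might choose, not defaults or a recommendation from this thread:
{code}
import org.apache.hadoop.conf.Configuration;

public class ReplaceDatanodePolicyExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Whether the replace-datanode-on-failure feature is enabled at all.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
    // Policy controlling when a failed datanode in a write pipeline is replaced
    // (commonly NEVER, DEFAULT, or ALWAYS).
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
    System.out.println(
        conf.get("dfs.client.block.write.replace-datanode-on-failure.policy"));
  }
}
{code}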

> DFS client says "no more good datanodes being available to try" on a single 
> drive failure
> -
>
> Key: HDFS-8960
> URL: https://issues.apache.org/jira/browse/HDFS-8960
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.1
> Environment: openjdk version "1.8.0_45-internal"
> OpenJDK Runtime Environment (build 1.8.0_45-internal-b14)
> OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Benoit Sigoure
> Attachments: blk_1073817519_77099.log, r12s13-datanode.log, 
> r12s16-datanode.log
>
>
> Since we upgraded to 2.7.1 we regularly see single-drive failures cause 
> widespread problems at the HBase level (with the default 3x replication 
> target).
> Here's an example.  This HBase RegionServer is r12s16 (172.24.32.16) and is 
> writing its WAL to [172.24.32.16:10110, 172.24.32.8:10110, 
> 172.24.32.13:10110] as can be seen by the following occasional messages:
> {code}
> 2015-08-23 06:28:40,272 INFO  [sync.3] wal.FSHLog: Slow sync cost: 123 ms, 
> current pipeline: [172.24.32.16:10110, 172.24.32.8:10110, 172.24.32.13:10110]
> {code}
> A bit later, the second node in the pipeline above is going to experience an 
> HDD failure.
> {code}
> 2015-08-23 07:21:58,720 WARN  [DataStreamer for file 
> /hbase/WALs/r12s16.sjc.aristanetworks.com,9104,1439917659071/r12s16.sjc.aristanetworks.com%2C9104%2C1439917659071.default.1440314434998
>  block BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099] 
> hdfs.DFSClient: Error Recovery for block 
> BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099 in pipeline 
> 172.24.32.16:10110, 172.24.32.13:10110, 172.24.32.8:10110: bad datanode 
> 172.24.32.8:10110
> {code}
> And then HBase will go like "omg I can't write to my WAL, let me commit 
> suicide".
> {code}
> 2015-08-23 07:22:26,060 FATAL 
> [regionserver/r12s16.sjc.aristanetworks.com/172.24.32.16:9104.append-pool1-t1]
>  wal.FSHLog: Could not append. Requesting close of wal
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[172.24.32.16:10110, 172.24.32.13:10110], 
> original=[172.24.32.16:10110, 172.24.32.13:10110]). The current failed 
> datanode replacement policy is DEFAULT, and a client may configure this via 

[jira] [Updated] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9706:

Attachment: HDFS-9706.02.patch

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9706:

Attachment: (was: HDFS-9706.02.patch)

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9706:

Attachment: HDFS-9706.03.patch

Thanks Yongjun for the advice; makes sense to me. Attaching patch 3 as 
suggested.

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch, 
> HDFS-9706.03.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7955:
---
Attachment: HDFS-7955-002.patch

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9330) Support reconfiguring dfs.datanode.duplicate.replica.deletion without DN restart

2016-01-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120005#comment-15120005
 ] 

Xiaobing Zhou commented on HDFS-9330:
-

Thanks [~arpitagarwal] for the quite detailed reviews. I posted V004, which 
addresses the first of your three comments. I will comment on the last one on 
HDFS-6808 and discuss the second with you.

> Support reconfiguring dfs.datanode.duplicate.replica.deletion without DN 
> restart 
> -
>
> Key: HDFS-9330
> URL: https://issues.apache.org/jira/browse/HDFS-9330
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9330-HDFS-9000.002.patch, 
> HDFS-9330-HDFS-9000.003.patch, HDFS-9330-HDFS-9000.004.patch, 
> HDFS-9330.001.patch
>
>
> This is to reconfigure
> {code}
> dfs.datanode.duplicate.replica.deletion
> {code}
> without restarting DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9330) Support reconfiguring dfs.datanode.duplicate.replica.deletion without DN restart

2016-01-27 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9330:

Attachment: HDFS-9330-HDFS-9000.004.patch

> Support reconfiguring dfs.datanode.duplicate.replica.deletion without DN 
> restart 
> -
>
> Key: HDFS-9330
> URL: https://issues.apache.org/jira/browse/HDFS-9330
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9330-HDFS-9000.002.patch, 
> HDFS-9330-HDFS-9000.003.patch, HDFS-9330-HDFS-9000.004.patch, 
> HDFS-9330.001.patch
>
>
> This is to reconfigure
> {code}
> dfs.datanode.duplicate.replica.deletion
> {code}
> without restarting DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9654) Code refactoring for HDFS-8578

2016-01-27 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119965#comment-15119965
 ] 

Chris Trezzo commented on HDFS-9654:


Sounds good! +1 on the patch. Thanks [~szetszwo].

> Code refactoring for HDFS-8578
> --
>
> Key: HDFS-9654
> URL: https://issues.apache.org/jira/browse/HDFS-9654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9654_20160116.patch
>
>
> This is a code refactoring JIRA in order to change Datanode to process all 
> storage/data dirs in parallel; see also HDFS-8578.
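
A minimal, self-contained sketch of the stated goal (processing all storage/data dirs in parallel); the names below are made up for illustration, and this is not the attached h9654_20160116.patch:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelDirLoader {
  static void loadAll(List<String> dataDirs) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, dataDirs.size()));
    List<Future<?>> futures = new ArrayList<>();
    for (String dir : dataDirs) {
      futures.add(pool.submit(() -> loadOne(dir)));   // one task per volume
    }
    for (Future<?> f : futures) {
      try {
        f.get();                                      // surface per-volume failures
      } catch (ExecutionException e) {
        System.err.println("Failed to load a volume: " + e.getCause());
      }
    }
    pool.shutdown();
  }

  private static void loadOne(String dir) {
    // Placeholder for the real per-volume work.
    System.out.println("Loading " + dir);
  }
}
{code}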



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7955:
---
Component/s: erasure-coding

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120011#comment-15120011
 ] 

Yongjun Zhang commented on HDFS-9706:
-

Thanks Xiao for the new rev. +1 pending jenkins.

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch, 
> HDFS-9706.03.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15119989#comment-15119989
 ] 

Rakesh R commented on HDFS-7955:


Thank you [~zhz], [~andrew.wang], [~szetszwo] for the review comments so far.

I can see that a lot of refactoring is required to complete this task, so I 
think we could split this jira into multiple sub-tasks. To begin with, I've 
considered only the {{ECRecoveryWork}}-related code changes and attached a 
patch for them. Kindly review. If everyone agrees, I will identify and raise 
separate sub-tasks for the other modules/logical categories. Comments are welcome.

Thank you [~umamaheswararao] for the offline discussions.

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7955:
---
Status: Patch Available  (was: In Progress)

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9610) test_libhdfs_threaded_hdfs_static generates a lot of noise on stderr which looks like a failure even though it isn't

2016-01-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120033#comment-15120033
 ] 

Colin Patrick McCabe commented on HDFS-9610:


The cmakebuilder maven plugin always puts stderr output in two places: a 
{{.stderr}} file and Maven's output.  (The cmakebuilder plugin also saves 
stdout to a {{.stdout}} file, but doesn't echo it to Maven's output.)  My idea 
is that the suppression flag would simply stop the stderr output from going to 
Maven's stdout, while it would still go to the {{.stderr}} file.  And we'd only 
need the suppression flag for "noisy" tests.

bq. Thanks for the pointer; I just checked out 
hadoop-common-project/hadoop-common/pom.xml and saw a few good examples of 
that. I've been writing little sanity tests for HDFS-8765 and HDFS-9227 on my 
side already so it's good to know I can reuse them once I get around to 
finishing those patches.

Yeah, it would be great to see more native tests, and to fix some of the flaky 
ones we have (see YARN-4594).

> test_libhdfs_threaded_hdfs_static generates a lot of noise on stderr which 
> looks like a failure even though it isn't
> 
>
> Key: HDFS-9610
> URL: https://issues.apache.org/jira/browse/HDFS-9610
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Assignee: James Clampffer
> Attachments: HDFS-9610.HDFS-8707.000.patch, LastTest.log
>
>
> Playing around with adding ctest output support to Yetus, and I stumbled upon 
> a case where the tests throw errors left and right but claim success.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6808) Add command line option to ask DataNode reload configuration.

2016-01-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120032#comment-15120032
 ] 

Xiaobing Zhou commented on HDFS-6808:
-

Thanks [~arpitagarwal] for pointing that out. I think the inconsistency between 
ReconfigurableBase#reconfigureProperty and ReconfigurationThread#run should be 
handled in ReconfigurationThread#run by adding a snippet like the following:
{noformat}
this.parent.reconfigurePropertyImpl(change.prop, change.newVal);
if (change.newVal != null) {
  this.parent.getConf().set(change.prop, change.newVal);
} else {
  this.parent.getConf().unset(change.prop);
}
{noformat}

This follows the same pattern as ReconfigurableBase#reconfigureProperty. 
Thoughts?

> Add command line option to ask DataNode reload configuration.
> -
>
> Key: HDFS-6808
> URL: https://issues.apache.org/jira/browse/HDFS-6808
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.6.0
>
> Attachments: HDFS-6808.000.combo.patch, HDFS-6808.000.patch, 
> HDFS-6808.001.combo.patch, HDFS-6808.001.patch, HDFS-6808.002.combo.patch, 
> HDFS-6808.002.patch, HDFS-6808.003.combo.txt, HDFS-6808.003.patch, 
> HDFS-6808.004.combo.patch, HDFS-6808.004.patch, HDFS-6808.005.combo.patch, 
> HDFS-6808.005.patch, HDFS-6808.006.combo.patch, HDFS-6808.006.patch, 
> HDFS-6808.007.combo.patch, HDFS-6808.007.patch, HDFS-6808.008.combo.patch, 
> HDFS-6808.008.patch, HDFS-6808.009.combo.patch, HDFS-6808.009.patch, 
> HDFS-6808.010.patch, HDFS-6808.011.patch
>
>
> The workflow for dynamically changing data volumes on a DataNode is:
> # The user manually changes {{dfs.datanode.data.dir}} in the configuration file.
> # The user uses the command line to notify the DN to reload its configuration 
> and update its volumes.
> This work adds command line support for notifying the DN to reload its 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9701) DN may deadlock when hot-swapping under load

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120034#comment-15120034
 ] 

Hadoop QA commented on HDFS-9701:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 134 unchanged - 1 fixed = 134 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784678/HDFS-9701.05.patch |
| JIRA Issue | HDFS-9701 |
| Optional Tests |  asflicense  compile  javac  

[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120069#comment-15120069
 ] 

Hadoop QA commented on HDFS-7955:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HDFS-7955 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784700/HDFS-7955-002.patch |
| JIRA Issue | HDFS-7955 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14265/console |


This message was automatically generated.



> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It is different from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process as _block finalization_ or _block completion_. I 
> prefer this option because this is literally not a recovery.
> #* If we want to keep existing terms unchanged we can name all EC recovery 
> and re-replication logics as _reconstruction_.  
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120091#comment-15120091
 ] 

Hadoop QA commented on HDFS-9706:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 57 unchanged - 1 fixed = 57 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 6s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120097#comment-15120097
 ] 

Xiao Chen commented on HDFS-9706:
-

Thanks Yongjun!
Failed tests look unrelated, and as mentioned earlier, 'This is a logging 
change, so no tests added'.

> Log more details in debug logs in BlockReceiver's constructor
> -
>
> Key: HDFS-9706
> URL: https://issues.apache.org/jira/browse/HDFS-9706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-9706.01.patch, HDFS-9706.02.patch, 
> HDFS-9706.03.patch
>
>
> Currently {{BlockReceiver}}'s constructor has some debug logs to help 
> identify problems. During my triage of HDFS-9701, I needed to add 
> {{isTransfer}} to the logs to see which way the code goes later.
> I propose adding more details to the debug logs, to save future effort. Will 
> also see whether more details need to be logged.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HDFS-7035) Make adding a new data directory to the DataNode an atomic operation and improve error handling

2016-01-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7035:

Comment: was deleted

(was: -There is probably one more bug. The behavior of {{ReconfigurableBase}} 
is not consistent when calling {{reconfigurePropertyImpl}}.-

-{{ReconfigurationThread#run}} does not update the value in the cached 
configuration object whereas {{ReconfigurableBase#reconfigureProperty}} does. 
The contract of {{reconfigurePropertyImpl}} does not specify who is supposed to 
update it.-

-You may want to hold off posting an updated patch for this and related Jiras 
until we get this part answered.-

Ignore this, the comment was intended for HDFS-9330.)

> Make adding a new data directory to the DataNode an atomic operation and 
> improve error handling
> ---
>
> Key: HDFS-7035
> URL: https://issues.apache.org/jira/browse/HDFS-7035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-7035.000.combo.patch, HDFS-7035.000.patch, 
> HDFS-7035.001.combo.patch, HDFS-7035.001.patch, HDFS-7035.002.patch, 
> HDFS-7035.003.patch, HDFS-7035.003.patch, HDFS-7035.004.patch, 
> HDFS-7035.005.patch, HDFS-7035.007.patch, HDFS-7035.008.patch, 
> HDFS-7035.009.patch, HDFS-7035.010.patch, HDFS-7035.010.patch, 
> HDFS-7035.011.patch, HDFS-7035.012.patch, HDFS-7035.013.patch, 
> HDFS-7035.014.patch, HDFS-7035.015.patch, HDFS-7035.016.patch
>
>
> It refactors {{DataStorage}} and {{BlockPoolSliceStorage}} to reduce 
> duplicate code and to support atomic add-volume operations. It also 
> parallelizes the data volume loading: each thread loads one volume.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7035) Make adding a new data directory to the DataNode an atomic operation and improve error handling

2016-01-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120051#comment-15120051
 ] 

Arpit Agarwal commented on HDFS-7035:
-

bq. If it is a concern in HDFS-9330, can we suggest reconfigurePropertyImpl() 
to return a value that is stored in conf eventually?
Thanks [~eddyxu], this is a good suggestion. I'm not sure we can change the 
behavior of {{reconfigurePropertyImpl}} without breaking backwards 
compatibility; then again, we never published {{ReconfigurableBase}} as a 
public API, so it may be okay.

If we want to be conservative and retain compatibility, we can add a new 
overload with a default implementation that throws. {{ReconfigurableBase}} 
would then have to try both. I'll file a jira.
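
A standalone sketch of that compatibility idea, with hypothetical names (this is not the real {{ReconfigurableBase}} API): the new overload returns the effective value and throws by default, and the caller falls back to the legacy void overload when a subclass has not implemented it.
{code}
import org.apache.hadoop.conf.Configuration;

abstract class ReconfigurableSketch {
  protected final Configuration conf = new Configuration();

  // Legacy contract: apply the change, return nothing.
  protected abstract void reconfigurePropertyImpl(String property, String newVal);

  // Hypothetical new overload: apply the change and return the effective value.
  protected String reconfigurePropertyImplEffective(String property, String newVal) {
    throw new UnsupportedOperationException("not implemented");
  }

  final void applyChange(String property, String newVal) {
    String effective;
    try {
      effective = reconfigurePropertyImplEffective(property, newVal);
    } catch (UnsupportedOperationException e) {
      // Old subclasses: assume the requested value is the effective one.
      reconfigurePropertyImpl(property, newVal);
      effective = newVal;
    }
    if (effective != null) {
      conf.set(property, effective);
    } else {
      conf.unset(property);
    }
  }
}
{code}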


> Make adding a new data directory to the DataNode an atomic operation and 
> improve error handling
> ---
>
> Key: HDFS-7035
> URL: https://issues.apache.org/jira/browse/HDFS-7035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: 2.6.1-candidate
> Fix For: 2.7.0, 2.6.1
>
> Attachments: HDFS-7035.000.combo.patch, HDFS-7035.000.patch, 
> HDFS-7035.001.combo.patch, HDFS-7035.001.patch, HDFS-7035.002.patch, 
> HDFS-7035.003.patch, HDFS-7035.003.patch, HDFS-7035.004.patch, 
> HDFS-7035.005.patch, HDFS-7035.007.patch, HDFS-7035.008.patch, 
> HDFS-7035.009.patch, HDFS-7035.010.patch, HDFS-7035.010.patch, 
> HDFS-7035.011.patch, HDFS-7035.012.patch, HDFS-7035.013.patch, 
> HDFS-7035.014.patch, HDFS-7035.015.patch, HDFS-7035.016.patch
>
>
> It refactors {{DataStorage}} and {{BlockPoolSliceStorage}} to reduce 
> duplicate code and to support atomic add-volume operations. It also 
> parallelizes the data volume loading: each thread loads one volume.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6808) Add command line option to ask DataNode reload configuration.

2016-01-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120054#comment-15120054
 ] 

Arpit Agarwal commented on HDFS-6808:
-

Hi Xiaobing, we can continue the discussion on HDFS-7035 since [~eddyxu] 
responded there.

bq. I think the inconsistency between ReconfigurableBase#reconfigureProperty 
and ReconfigurationThread#run should be handled in ReconfigurationThread#run by 
adding the snippet as the following:
This makes the behavior consistent, but it does not allow handling situations 
where the effective value differs from both the old and new values, as with the 
changed volumes. Lei had a good suggestion to have reconfigurePropertyImpl 
return the new effective value.

> Add command line option to ask DataNode reload configuration.
> -
>
> Key: HDFS-6808
> URL: https://issues.apache.org/jira/browse/HDFS-6808
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.6.0
>
> Attachments: HDFS-6808.000.combo.patch, HDFS-6808.000.patch, 
> HDFS-6808.001.combo.patch, HDFS-6808.001.patch, HDFS-6808.002.combo.patch, 
> HDFS-6808.002.patch, HDFS-6808.003.combo.txt, HDFS-6808.003.patch, 
> HDFS-6808.004.combo.patch, HDFS-6808.004.patch, HDFS-6808.005.combo.patch, 
> HDFS-6808.005.patch, HDFS-6808.006.combo.patch, HDFS-6808.006.patch, 
> HDFS-6808.007.combo.patch, HDFS-6808.007.patch, HDFS-6808.008.combo.patch, 
> HDFS-6808.008.patch, HDFS-6808.009.combo.patch, HDFS-6808.009.patch, 
> HDFS-6808.010.patch, HDFS-6808.011.patch
>
>
> The workflow for dynamically changing data volumes on a DataNode is:
> # The user manually changes {{dfs.datanode.data.dir}} in the configuration file.
> # The user uses the command line to notify the DN to reload its configuration 
> and update its volumes.
> This work adds command line support for notifying the DN to reload its 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9706) Log more details in debug logs in BlockReceiver's constructor

2016-01-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120109#comment-15120109
 ] 

Hadoop QA commented on HDFS-9706:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 57 unchanged - 1 fixed = 57 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 46s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 163m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784684/HDFS-9706.02.patch |
| JIRA Issue | HDFS-9706 |
| Optional Tests |  asflicense  compile  

[jira] [Updated] (HDFS-9555) LazyPersistFileScrubber should still sleep if there are errors in the clear progress

2016-01-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9555:
-
Target Version/s: 2.6.5  (was: 2.6.4)

> LazyPersistFileScrubber should still sleep if there are errors in the clear 
> progress
> 
>
> Key: HDFS-9555
> URL: https://issues.apache.org/jira/browse/HDFS-9555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: 9555-v1.patch
>
>
> If LazyPersistFileScrubber.clearCorruptLazyPersistFiles() throws an exception 
> in run(), no sleep logic runs, so the scrubber restarts immediately. Because 
> it may keep failing, the NameNode log fills with ERROR messages saying 
> "Ignoring exception in LazyPersistFileScrubber".
> We need to sleep when we catch the exception.
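
A minimal sketch of the requested behavior follows. It is illustrative only, not the actual Hadoop source: the class name, shutdown flag, and interval field are assumptions. The point is that an exception from the scrub pass must fall through to the sleep instead of restarting the pass immediately.

{noformat}
// Illustrative sketch, not the real LazyPersistFileScrubber: an exception
// from the scrub pass must not skip the sleep, otherwise the thread spins
// and floods the NameNode log with ERROR messages.
public class ScrubberLoopSketch implements Runnable {
  private volatile boolean shouldRun = true;   // assumed shutdown flag
  private final long scrubIntervalSec = 600;   // assumed scrub interval

  private void clearCorruptLazyPersistFiles() throws Exception {
    // placeholder for the real scrub pass
  }

  @Override
  public void run() {
    while (shouldRun) {
      try {
        clearCorruptLazyPersistFiles();
      } catch (Exception e) {
        // Log and fall through to the sleep below rather than retrying
        // immediately -- this is the behavior the issue asks for.
        System.err.println("Ignoring exception in LazyPersistFileScrubber: " + e);
      }
      try {
        Thread.sleep(scrubIntervalSec * 1000);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        break;
      }
    }
  }

  public void stop() {
    shouldRun = false;
  }
}
{noformat}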



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9555) LazyPersistFileScrubber should still sleep if there are errors in the clear progress

2016-01-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120147#comment-15120147
 ] 

Junping Du commented on HDFS-9555:
--

Move all non-critical pending issues out of 2.6.4 into 2.6.5.

> LazyPersistFileScrubber should still sleep if there are errors in the clear 
> progress
> 
>
> Key: HDFS-9555
> URL: https://issues.apache.org/jira/browse/HDFS-9555
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: 9555-v1.patch
>
>
> If LazyPersistFileScrubber.clearCorruptLazyPersistFiles() throws an exception 
> in run(), no sleep logic runs, so the scrubber restarts immediately. Because 
> it may keep failing, the NameNode log fills with ERROR messages saying 
> "Ignoring exception in LazyPersistFileScrubber".
> We need to sleep when we catch the exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2016-01-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15120164#comment-15120164
 ] 

Zhe Zhang commented on HDFS-7955:
-

Thanks Rakesh! Not sure why Jenkins couldn't apply the patch on trunk; I've 
triggered it again. Reviewing now.

> Improve naming of classes, methods, and variables related to block 
> replication and recovery
> ---
>
> Key: HDFS-7955
> URL: https://issues.apache.org/jira/browse/HDFS-7955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Zhe Zhang
>Assignee: Rakesh R
> Attachments: HDFS-7955-001.patch, HDFS-7955-002.patch
>
>
> Many existing names should be revised to avoid confusion when blocks can be 
> both replicated and erasure coded. This JIRA aims to solicit opinions on 
> making those names more consistent and intuitive.
> # In current HDFS, _block recovery_ refers to the process of finalizing the 
> last block of a file, triggered by _lease recovery_. It differs from the 
> intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
> think of 2 options:
> #* Rename this process _block finalization_ or _block completion_. I 
> prefer this option because it is literally not a recovery.
> #* If we want to keep existing terms unchanged, we can name all EC recovery 
> and re-replication logic _reconstruction_.
> # As Kai [suggested | 
> https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
>  under HDFS-7369, several replication-based names should be made more generic:
> #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
> {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
> {{neededRecovery}}/{{neededReconstruction}}.
> #* {{PendingReplicationBlocks}}
> #* {{ReplicationMonitor}}
> I'm sure the above list is incomplete; discussions and comments are very 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-01-27 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-9711:
---

 Summary: Integrate CSRF prevention filter in WebHDFS.
 Key: HDFS-9711
 URL: https://issues.apache.org/jira/browse/HDFS-9711
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode, webhdfs
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
against cross-site request forgery attacks.  This issue tracks integration of 
that filter in WebHDFS.
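
For context, a hedged sketch of the custom-header technique such a filter typically applies is below. The class name, header name, and exempt-method list are illustrative assumptions for this sketch, not the HADOOP-12691 implementation: browsers cannot attach arbitrary custom headers to cross-site requests, so requiring one on state-changing requests blocks CSRF.

{noformat}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative CSRF-guard filter; names and defaults are assumptions made
// for this sketch, not the filter introduced by HADOOP-12691.
public class CsrfGuardFilterSketch implements Filter {
  private static final String REQUIRED_HEADER = "X-XSRF-HEADER";

  @Override
  public void init(FilterConfig conf) {
    // A real filter would read the header name and exempt methods from config.
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    String method = httpReq.getMethod();
    // Read-only methods are typically exempt; everything else must carry the
    // custom header, which a cross-site browser request cannot add.
    boolean readOnly = "GET".equals(method) || "HEAD".equals(method)
        || "OPTIONS".equals(method);
    if (readOnly || httpReq.getHeader(REQUIRED_HEADER) != null) {
      chain.doFilter(req, res);
    } else {
      ((HttpServletResponse) res).sendError(HttpServletResponse.SC_BAD_REQUEST,
          "Missing header required for CSRF protection: " + REQUIRED_HEADER);
    }
  }

  @Override
  public void destroy() {
  }
}
{noformat}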



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

