[jira] [Updated] (HDFS-10407) Erasure Coding: Rename CorruptReplicasMap to CorruptRedundancyMap in BlockManager to be more generic

2016-06-02 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10407:

Status: Patch Available  (was: Open)

> Erasure Coding: Rename CorruptReplicasMap to CorruptRedundancyMap in 
> BlockManager to be more generic
> -
>
> Key: HDFS-10407
> URL: https://issues.apache.org/jira/browse/HDFS-10407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10407-00.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> - {{CorruptReplicasMap}} to {{CorruptRedundancyMap}}
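
For context, the map being renamed associates a corrupt block with the DataNodes that hold a corrupt copy of it. A minimal sketch of the structure under its new name (the class body below is illustrative only, with stand-in type parameters, not the actual patch):

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: Block and DatanodeDescriptor are stand-ins for the
// real HDFS types; the actual map also records a corruption reason per node.
class CorruptRedundancyMap<Block, DatanodeDescriptor> {
  private final Map<Block, Set<DatanodeDescriptor>> corruptNodesByBlock =
      new HashMap<>();

  void addCorruptReplica(Block blk, DatanodeDescriptor node) {
    corruptNodesByBlock.computeIfAbsent(blk, b -> new HashSet<>()).add(node);
  }

  int numCorruptReplicas(Block blk) {
    Set<DatanodeDescriptor> nodes = corruptNodesByBlock.get(blk);
    return nodes == null ? 0 : nodes.size();
  }
}
{code}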






[jira] [Commented] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313548#comment-15313548
 ] 

Brahma Reddy Battula commented on HDFS-10367:
-

[~iwasakims], thanks a lot for the review and commit. Raised HADOOP-13234 to 
handle the random-port improvement.

> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.005.patch, 
> HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}
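
Collisions like this are usually avoided in tests by asking the OS for an ephemeral port rather than hard-coding one. A minimal sketch of that pattern (this is not the HADOOP-13234 change itself):

{code}
import java.io.IOException;
import java.net.ServerSocket;

final class TestPorts {
  /** Bind to port 0 so the OS picks a currently free port. */
  static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
  // The port can still be taken between this call and the real bind,
  // so callers typically also retry on BindException.
}
{code}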






[jira] [Updated] (HDFS-10407) Erasure Coding: Rename CorruptReplicasMap to CorruptRedundancyMap in BlockManager to be more generic

2016-06-02 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10407:

Attachment: HDFS-10407-00.patch

> Erasure Coding: Rename CorruptReplicasMap to CorruptRedundancyMap in 
> BlockManager to be more generic
> -
>
> Key: HDFS-10407
> URL: https://issues.apache.org/jira/browse/HDFS-10407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10407-00.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> - {{CorruptReplicasMap}} to {{CorruptRedundancyMap}}






[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313516#comment-15313516
 ] 

Arpit Agarwal commented on HDFS-10341:
--

+1 for the branch-2 patch also, thanks [~ajisakaa].

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.branch-2.patch, HDFS-10341.01.patch, 
> HDFS-10341.02.patch, HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.
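
As a rough illustration, such a timeout count can be surfaced through the metrics2 API along the following lines (the class and metric names here are assumptions, not the committed patch):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(context = "dfs")
class PendingReplicationMetricsSketch {
  // Updated whenever a pending replication request exceeds its timeout.
  @Metric("Timed out block replications")
  MutableGaugeLong numTimedOutPendingReplications;

  void onReplicationTimeout() {
    numTimedOutPendingReplications.incr();
  }
}
{code}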






[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313512#comment-15313512
 ] 

Akira AJISAKA commented on HDFS-10341:
--

Thanks Xiaobing for reviewing and thanks Arpit for reviewing & committing. 
Attached a patch for branch-2.

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.branch-2.patch, HDFS-10341.01.patch, 
> HDFS-10341.02.patch, HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10341:
-
Attachment: HDFS-10341.01.branch-2.patch

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.branch-2.patch, HDFS-10341.01.patch, 
> HDFS-10341.02.patch, HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10341:
-
Status: Patch Available  (was: Reopened)

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.branch-2.patch, HDFS-10341.01.patch, 
> HDFS-10341.02.patch, HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Reopened] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reopened HDFS-10341:
--

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Commented] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313497#comment-15313497
 ] 

Arpit Agarwal commented on HDFS-10478:
--

+1 thanks [~anu].

> DiskBalancer: resolve volume path names
> ---
>
> Key: HDFS-10478
> URL: https://issues.apache.org/jira/browse/HDFS-10478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10478-HDFS-1312.001.patch
>
>
> When creating a plan, we don't fetch the names of the volumes. But with the -v 
> option we try to print those paths so users can see how the data is being moved. 
> This patch gets the volume names before a plan is persisted.
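
Schematically, the fix amounts to a pass like the following over the plan before it is saved (the Step shape and method names below are assumptions for illustration, not the DiskBalancer types):

{code}
import java.util.List;
import java.util.Map;

// Illustrative types; the real DiskBalancer plan classes differ.
interface Step {
  String getSourceVolumeUuid();
  String getDestinationVolumeUuid();
  void setSourcePath(String path);
  void setDestinationPath(String path);
}

final class PlanResolverSketch {
  /** Fill volume path names, looked up by volume UUID, into every step. */
  static void resolveVolumePaths(List<Step> steps,
                                 Map<String, String> uuidToPath) {
    for (Step step : steps) {
      step.setSourcePath(uuidToPath.get(step.getSourceVolumeUuid()));
      step.setDestinationPath(uuidToPath.get(step.getDestinationVolumeUuid()));
    }
  }
}
{code}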






[jira] [Commented] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313463#comment-15313463
 ] 

Yiqun Lin commented on HDFS-10471:
--

Thanks [~shahrs87] for the review and [~ajisakaa] for the commit!

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the command related to SetQuota is not shown 
> correctly. In the message, the name {{quota}} is shown as {{N}}, but {{N}} 
> never appeared before.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirname>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} has a similar problem.
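
For comparison, a consistent version of the message would introduce the placeholder it later refers to, along these lines (one possible wording, not necessarily the committed one):

{noformat}
-setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each directory <dirname>.
  The directory quota is a long integer that puts a hard limit
  on the number of names in the directory tree.
  For each directory, attempt to set the quota. An error will be reported if
  1. <quota> is not a positive integer, or
  2. user is not an administrator, or
  3. the directory does not exist or is a file.
  Note: A quota of 1 would force the directory to remain empty.
{noformat}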






[jira] [Commented] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-06-02 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313446#comment-15313446
 ] 

Takanobu Asanuma commented on HDFS-10449:
-

[~templedf], [~ajisakaa]
Thank you for reviewing and committing!

> TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2
> -
>
> Key: HDFS-10449
> URL: https://issues.apache.org/jira/browse/HDFS-10449
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Fix For: 2.9.0
>
> Attachments: HDFS-10449.branch-2.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
>   Time elapsed: 8.729 sec  <<< FAILURE!
> java.lang.AssertionError: No exception was generated while stopping sink even 
> though HDFS was unavailable
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
> {noformat}
> This passes fine on trunk.
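
The failing assertion follows the usual fail-unless-thrown pattern; schematically (a simplified rendering, not the exact test body):

{code}
import static org.junit.Assert.fail;

final class CloseFailureAssertion {
  /** Stopping the sink while HDFS is down must surface an exception. */
  static void assertCloseFails(AutoCloseable sink) {
    try {
      sink.close();
      fail("No exception was generated while stopping sink even though "
          + "HDFS was unavailable");
    } catch (Exception expected) {
      // Expected: HDFS is unavailable, so close() cannot flush its data.
    }
  }
}
{code}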






[jira] [Commented] (HDFS-10446) Add interleaving tests for async DFS API

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313386#comment-15313386
 ] 

Hadoop QA commented on HDFS-10446:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 56m 52s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 50s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807848/HDFS-10446-HDFS-9924.000.patch
 |
| JIRA Issue | HDFS-10446 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 57872671d311 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 97e2449 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15638/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15638/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15638/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add interleaving tests for async DFS API
> 
>
> Key: HDFS-10446
> URL: https://issues.apache.org/jira/browse/HDFS-10446
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10446-HDFS-9924.000.patch
>

[jira] [Commented] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313356#comment-15313356
 ] 

Andras Bokor commented on HDFS-5059:


Thanks [~ajisakaa] for the correction.

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create or delete a snapshot but accidentally 
> specify a non-existent directory, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}
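
The cleanup amounts to failing fast on the missing path so that only one accurate error is printed. A hedged sketch against the public FileSystem API (not the actual shell code path):

{code}
import java.io.FileNotFoundException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class SnapshotErrorSketch {
  static void deleteSnapshotChecked(FileSystem fs, Path dir,
      String snapshotName) throws Exception {
    if (!fs.exists(dir)) {
      // Report the real problem once, instead of following up with a
      // misleading "Permission denied".
      throw new FileNotFoundException(dir + ": No such file or directory");
    }
    fs.deleteSnapshot(dir, snapshotName);
  }
}
{code}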






[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-02 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313282#comment-15313282
 ] 

Benoy Antony commented on HDFS-10477:
-

If it's possible to release the lock per storage, then that's better. 
If not, I prefer the first version, which releases the lock per 
datanode without the additional processing. 
The logs show that each node is processed in around 10 seconds.
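
Releasing the lock per DataNode corresponds roughly to this chunking pattern (a sketch under assumed interfaces, not the actual patch):

{code}
import java.util.function.Consumer;

final class RecommissionSketch {
  interface Namesystem { void writeLock(); void writeUnlock(); }

  /** Take and release the write lock once per node so other RPCs can run. */
  static <D> void stopDecommission(Namesystem ns, Iterable<D> rackNodes,
      Consumer<D> stopOne) {
    for (D node : rackNodes) {
      ns.writeLock();
      try {
        stopOne.accept(node);  // may invalidate hundreds of thousands of blocks
      } finally {
        ns.writeUnlock();
      }
    }
  }
}
{code}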

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated 

[jira] [Comment Edited] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-02 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313282#comment-15313282
 ] 

Benoy Antony edited comment on HDFS-10477 at 6/2/16 11:21 PM:
--

If it's possible to release the lock per storage, then that's better. 
If not, I prefer the first version, which releases the lock per 
datanode without the additional processing. 
The logs show that each node is processed in around 10 seconds.


was (Author: benoyantony):
If its possible to release lock per storage, then that's better. 
If not , I prefer the first version which does releases the lock per each 
datanode without the additional processing. 
The logs show that  the each node is processed in around 10 seconds. 

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 

[jira] [Commented] (HDFS-10446) Add interleaving tests for async DFS API

2016-06-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313246#comment-15313246
 ] 

Xiaobing Zhou commented on HDFS-10446:
--

I posted the v000 patch; please review it. Thanks.

> Add interleaving tests for async DFS API
> 
>
> Key: HDFS-10446
> URL: https://issues.apache.org/jira/browse/HDFS-10446
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10446-HDFS-9924.000.patch
>
>
> In the spirit of accuracy and correctness, async DFS APIs should also be tested 
> with interleaved requests and responses. In particular, simulate a random order 
> of async calls and a random order of retrieving the final results via 
> Future#get.
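
A minimal sketch of that pattern, assuming the HDFS-9924 AsyncDistributedFileSystem whose rename returns a Future (the surrounding names are illustrative):

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.AsyncDistributedFileSystem;

final class InterleavingSketch {
  /** Issue async renames in one random order, then get results in another. */
  static void interleavedRenames(AsyncDistributedFileSystem adfs,
      List<Path> srcs, List<Path> dsts) throws Exception {
    int n = srcs.size();
    List<Integer> order = new ArrayList<>();
    for (int i = 0; i < n; i++) {
      order.add(i);
    }
    List<Future<Void>> futures =
        new ArrayList<>(Collections.nCopies(n, (Future<Void>) null));
    Collections.shuffle(order);      // random order of async calls
    for (int i : order) {
      futures.set(i, adfs.rename(srcs.get(i), dsts.get(i)));
    }
    Collections.shuffle(order);      // independent random order of gets
    for (int i : order) {
      futures.get(i).get();          // retrieve the final results
    }
  }
}
{code}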






[jira] [Updated] (HDFS-10446) Add interleaving tests for async DFS API

2016-06-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10446:
-
Status: Patch Available  (was: Open)

> Add interleaving tests for async DFS API
> 
>
> Key: HDFS-10446
> URL: https://issues.apache.org/jira/browse/HDFS-10446
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10446-HDFS-9924.000.patch
>
>
> In the spirit of accuracy and correctness, async DFS APIs should also be tested 
> with interleaved requests and responses. In particular, simulate a random order 
> of async calls and a random order of retrieving the final results via 
> Future#get.






[jira] [Updated] (HDFS-10446) Add interleaving tests for async DFS API

2016-06-02 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10446:
-
Attachment: HDFS-10446-HDFS-9924.000.patch

> Add interleaving tests for async DFS API
> 
>
> Key: HDFS-10446
> URL: https://issues.apache.org/jira/browse/HDFS-10446
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10446-HDFS-9924.000.patch
>
>
> In the spirit of accuracy and correctness, async DFS APIs should also be tested 
> with interleaved requests and responses. In particular, simulate a random order 
> of async calls and a random order of retrieving the final results via 
> Future#get.






[jira] [Commented] (HDFS-9859) Backport HDFS-6440 to branch-2

2016-06-02 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313207#comment-15313207
 ] 

Elliott Clark commented on HDFS-9859:
-

That would be great.

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-9859
> URL: https://issues.apache.org/jira/browse/HDFS-9859
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> HDFS-6440 is a very interesting feature for people who want to run HDFS in an 
> environment where machines have to join and leave a cluster. Until 3.0 is 
> close, we should encourage that.






[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313168#comment-15313168
 ] 

Hadoop QA commented on HDFS-6937:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 35 
new + 529 unchanged - 4 fixed = 564 total (was 533) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 23 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807820/HDFS-6937.003.patch |
| JIRA Issue | HDFS-6937 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 932d720a33fd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 97e2449 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15637/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15637/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15637/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs | 

[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313115#comment-15313115
 ] 

Xiaobing Zhou commented on HDFS-10341:
--

Thank you [~arpitagarwal] for committing. [~ajisakaa] would you please also 
provide a patch for branch-2? Thanks.

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-06-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313112#comment-15313112
 ] 

Zhe Zhang commented on HDFS-10301:
--

Thanks for the discussions, [~cmccabe], [~shv], [~redvine].

I think the challenge here is that different deployments have different levels 
of 1) BR split; 2) BRs interleaving; 3) zombie storages. E.g. BR split might be 
completely turned off in configuration, and BR interleaving heavily depends on 
how busy the NN is.

*a)* Patch v5 (from Colin) works well when BRs rarely interleave. In the worst 
case, a zombie storage could remain on NN for several full-BR cycles.
*b)* Patch v4 (from Vinitha) works well when BRs are rarely split (or split 
into many RPCs). The worst case is where each BR is split into a small number 
of RPCs -- if each full BR is split into {{n}} RPCs, the relative overhead is 1 
/ n, in terms of # of RPCs.
*c)* As Colin suggested, we can also extend first / last RPC in a full BR with 
the list of storages. By doing that we are adding overhead to every BR RPC (it 
needs to mark whether it has the list). Theoretically, the worst-case-overhead 
is to add this to an empty BR.

So overall, I think c) is the best long term solution, because its worst case 
scenario is the least likely to happen, and the consequence is the most 
tolerable. It is more complex than b) though. Given the size of the v4 patch, 
are we OK to go with b) (v4 patch) first and do c) as a follow-on?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.01.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.






[jira] [Updated] (HDFS-9184) Logging HDFS operation's caller context into audit logs

2016-06-02 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9184:

Issue Type: New Feature  (was: Task)

> Logging HDFS operation's caller context into audit logs
> ---
>
> Key: HDFS-9184
> URL: https://issues.apache.org/jira/browse/HDFS-9184
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9184.000.patch, HDFS-9184.001.patch, 
> HDFS-9184.002.patch, HDFS-9184.003.patch, HDFS-9184.004.patch, 
> HDFS-9184.005.patch, HDFS-9184.006.patch, HDFS-9184.007.patch, 
> HDFS-9184.008.patch, HDFS-9184.009.patch
>
>
> For a given HDFS operation (e.g. delete file), it's very helpful to track 
> which upper-level job issues it. The upper-level callers may be specific 
> Oozie tasks, MR jobs, and hive queries. One scenario is that the namenode 
> (NN) is abused/spammed; the operator may want to know immediately which MR 
> job should be blamed so that she can kill it. To this end, the caller context 
> contains at least the application-dependent "tracking id".
> There are several existing techniques that may be related to this problem.
> 1. Currently the HDFS audit log tracks the user of the operation, which 
> is obviously not enough. It's common that the same user issues multiple jobs 
> at the same time. Even for a single top-level task, tracking back to a 
> specific caller in a chain of operations of the whole workflow (e.g. Oozie -> 
> Hive -> Yarn) is hard, if not impossible.
> 2. HDFS integrated {{htrace}} support for providing tracing information 
> across multiple layers. The span is created in many places, interconnected 
> like a tree structure, which relies on offline analysis across RPC boundaries. 
> For this use case, {{htrace}} has to be enabled at a 100% sampling rate, which 
> introduces significant overhead. Moreover, passing additional information 
> (via annotations) other than the span id from the root of the tree to a leaf 
> is significant additional work.
> 3. In [HDFS-4680 | https://issues.apache.org/jira/browse/HDFS-4680], there 
> is some related discussion on this topic. The final patch implemented the 
> tracking id as a part of the delegation token. This protects the tracking 
> information from being changed or impersonated. However, Kerberos-authenticated 
> connections and insecure connections don't have tokens. 
> [HADOOP-8779] proposes to use tokens in all scenarios, but that might 
> mean changes to several upstream projects and is a major change in their 
> security implementation.
> We propose another approach to address this problem. We also treat the HDFS 
> audit log as a good place for after-the-fact root cause analysis. We propose 
> to put the caller id (e.g. Hive query id) in threadlocals. Specifically, on 
> the client side the threadlocal object is passed to the NN as a part of the 
> RPC header (optional), while on the server side the NN retrieves it from the 
> header and puts it into {{Handler}}'s threadlocals. Finally, in 
> {{FSNamesystem}}, the HDFS audit logger will record the caller context for 
> each operation. In this way, the existing code is not affected.
> It is still challenging to keep a "lying" client from abusing the caller 
> context. Our proposal is to add a {{signature}} field to the caller context. 
> The client chooses to provide its signature along with the caller id. The 
> operator may need to validate the signature at the time of offline analysis. 
> The NN is not responsible for validating the signature online.
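
On the client side, the proposal boils down to something like the following, sketched against the CallerContext API this work introduces (exact names may differ between versions):

{code}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.ipc.CallerContext;

final class CallerContextSketch {
  static void tagCurrentThread() {
    // Attach an application-level tracking id (e.g. a Hive query id) to the
    // current thread; subsequent RPCs carry it to the NN for audit logging.
    CallerContext context =
        new CallerContext.Builder("hive_query_id:query_12345")
            .setSignature("sig".getBytes(StandardCharsets.UTF_8)) // optional
            .build();
    CallerContext.setCurrent(context);
  }
}
{code}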






[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10341:
-
Fix Version/s: 3.0.0-alpha1

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10341:
-
Target Version/s:   (was: 3.0.0-alpha1)

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313020#comment-15313020
 ] 

Hudson commented on HDFS-10341:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9903 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9903/])
HDFS-10341. Add a metric to expose the timeout number of pending (arp: rev 
97e244947719d483c3f80521a00fec8e13dcb637)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReconstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md


> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the timeout number of pending replication blocks is 
> useful to get the cluster health.






[jira] [Updated] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-06-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-6937:
--
Attachment: HDFS-6937.003.patch

Uploaded my patch based on Yongjun's version 2.

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch, 
> HDFS-6937.003.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica to 
> the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5 (and 
> so on), it failed for the same reason. This led to the observation that DN2's 
> data is corrupted. 
> Found that the software currently truncates DN2's replica to the ACKed size 
> after DN3 terminates, but it doesn't check the correctness of the data 
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds a 
> checksum error, propagate this info back to the upstream DN (DN2 here); DN2 
> checks the correctness of the data already written to disk and truncates the 
> replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported in HDFS-3875, and the 
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution. 
> Filing this jira for the issue reported here. HDFS-3875 was filed by 
> [~tlipcon], who proposed something similar there.
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.
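
Sketching the proposed upstream-DN handling (the Replica interface and helper names here are hypothetical):

{code}
final class PipelineChecksumSketch {
  interface Replica {
    /** Scan the on-disk data and return the longest checksum-valid prefix. */
    long verifyChecksumUpTo(long length);
    void truncateTo(long length);
  }

  /**
   * On DN2, when the downstream node reports a checksum error: check the
   * local data and truncate to MIN(correctDataSize, ACKedSize).
   */
  static long onDownstreamChecksumError(Replica replica, long ackedSize) {
    long correctDataSize = replica.verifyChecksumUpTo(ackedSize);
    long safeLength = Math.min(correctDataSize, ackedSize);
    replica.truncateTo(safeLength);
    return safeLength;
  }
}
{code}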






[jira] [Updated] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover

2016-06-02 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10220:
-
Priority: Major  (was: Minor)

> A large number of expired leases can make namenode unresponsive and cause 
> failover
> --
>
> Key: HDFS-10220
> URL: https://issues.apache.org/jira/browse/HDFS-10220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
> Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, 
> HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, 
> HADOOP-10220.006.patch, HADOOP-10220.007.patch, threaddump_zkfc.txt
>
>
> I have faced a namenode failover due to an unresponsive namenode detected by 
> the zkfc, with lots of WARN messages (5 million) like this one:
> _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All 
> existing blocks are COMPLETE, lease removed, file closed._
> In the threaddump taken by the zkfc there are lots of threads blocked on a 
> lock.
> Looking at the code, a lock is taken by the LeaseManager.Monitor when some 
> lease must be released. Due to the really big number of leases to be 
> released, the namenode took too long to release them, blocking all other 
> tasks and making the zkfc think that the namenode was not available/stuck.
> The idea of this patch is to limit the number of leases released each time we 
> check for leases, so the lock won't be held for too long a period.
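
A minimal sketch of that idea (hypothetical names; the actual patch may 
differ): cap how many expired leases are released per check, so the Monitor 
can drop the namesystem write lock between batches.

{code:title=Sketch (hypothetical names)}
private static final int MAX_LEASES_PER_CHECK = 1000; // hypothetical cap

/** Release at most MAX_LEASES_PER_CHECK expired leases, then return so the
 *  Monitor can release the namesystem write lock before the next batch. */
synchronized boolean checkLeases() {
  boolean needSync = false;
  int released = 0;
  while (!sortedLeases.isEmpty()
      && sortedLeases.first().expiredHardLimit()
      && released < MAX_LEASES_PER_CHECK) {
    // Releasing a lease closes the file or starts block recovery.
    needSync |= releaseLease(sortedLeases.first()); // hypothetical helper
    released++;
  }
  return needSync;
}
{code}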



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-06-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-6937:
--
Assignee: Wei-Chiu Chuang  (was: Yongjun Zhang)

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detects a checksum error and terminates, and DN2 truncates its replica 
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5 
> (and so on), it failed for the same reason. This led to the observation that 
> DN2's data is corrupted.
> It was found that the software currently truncates DN2's replica to the 
> ACKed size after DN3 terminates, but it doesn't check the correctness of the 
> data already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds 
> a checksum error, propagate this info back to the upstream DN (DN2 here); 
> DN2 checks the correctness of the data already written to disk and truncates 
> the replica to MIN(correctDataSize, ACKedSize).
> This issue is similar to what was reported in HDFS-3875, and the truncation 
> at DN2 was actually introduced as part of the HDFS-3875 solution. Filing 
> this jira for the issue reported here. HDFS-3875 was filed by [~tlipcon], 
> who proposed something similar there.
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-06-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313007#comment-15313007
 ] 

Wei-Chiu Chuang commented on HDFS-6937:
---

I am taking over Yongjun's patch because he won't have Internet access for 
some time.

This is great work, and I took some time to understand it. I think that 
instead of throwing an IOException to simulate the injection of a checksum 
failure at the last datanode, it should enqueue an ERROR_CHECKSUM status to 
indicate the checksum failure. Without it, the last DN will shut down the 
connection, and the second DN in the pipeline will not know it was a checksum 
failure.

{code:title=BlockReceiver.java#sendAckUpstreamUnprotected}
if (ack == null) {
  // A new OOB response is being sent from this node. Regardless of
  // downstream nodes, reply should contain one reply.
  replies = new int[] { myHeader };
} else if (mirrorError) { // ack read error
  int h = PipelineAck.combineHeader(datanode.getECN(), Status.SUCCESS);
  int h1 = PipelineAck.combineHeader(datanode.getECN(), Status.ERROR);
  replies = new int[] {h, h1};
} else {
  short ackLen = type == PacketResponderType.LAST_IN_PIPELINE ? 0 : ack
      .getNumOfReplies();
  replies = new int[ackLen + 1];
  replies[0] = myHeader;
  for (int i = 0; i < ackLen; ++i) {
    replies[i + 1] = ack.getHeaderFlag(i);
  }
  // If the mirror has reported that it received a corrupt packet,
  // do self-destruct to mark myself bad, instead of making the
  // mirror node bad. The mirror is guaranteed to be good without
  // corrupt data on disk.
  if (ackLen > 0 && PipelineAck.getStatusFromHeader(replies[1]) ==
      Status.ERROR_CHECKSUM) {
    throw new IOException("Shutting down writer and responder "
        + "since the down streams reported the data sent by this "
        + "thread is corrupt");
  }
}
{code}
In this piece of code, if the next DN shuts down the connection, the local DN 
is always assumed to be good.
{code}
int h = PipelineAck.combineHeader(datanode.getECN(), Status.SUCCESS);
int h1 = PipelineAck.combineHeader(datanode.getECN(), Status.ERROR);
replies = new int[] {h, h1};
{code}
On the other hand, if the next DN responds with an ERROR_CHECKSUM, the local 
DN throws an IOException, which shuts down the connection with the previous DN 
in the pipeline. In the end, this replaces the middle datanode:

{code:title=DataStreamer.java#createBlockOutputStream}
// find the datanode that matches
if (firstBadLink.length() != 0) {
  for (int i = 0; i < nodes.length; i++) {
    // NB: Unconditionally using the xfer addr w/o hostname
    if (firstBadLink.equals(nodes[i].getXferAddr())) {
      errorState.setBadNodeIndex(i);
      break;
    }
  }
}
{code}
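
To make the suggestion concrete, a hedged sketch of what the fault injection 
at the last DN could do instead of throwing an IOException (the injection flag 
is hypothetical; the real patch may wire this differently):

{code:title=Sketch (hypothetical injection flag)}
// In the last DN's PacketResponder: ack the packet upstream with
// ERROR_CHECKSUM instead of dropping the connection, so the second DN
// takes the self-check branch shown above rather than the mirrorError one.
Status status = injectChecksumFailure   // hypothetical test hook
    ? Status.ERROR_CHECKSUM
    : Status.SUCCESS;
replies = new int[] {
    PipelineAck.combineHeader(datanode.getECN(), status) };
{code}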

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detects a checksum error and terminates, and DN2 truncates its replica 
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5 
> (and so on), it failed for the same reason. This led to the observation that 
> DN2's data is corrupted.
> It was found that the software currently truncates DN2's replica to the 
> ACKed size after DN3 terminates, but it doesn't check the correctness of the 
> data already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds 
> a checksum error, propagate this info back to the upstream DN (DN2 here); 
> DN2 checks the correctness of the data already written to disk and truncates 
> the replica to MIN(correctDataSize, ACKedSize).
> This issue is similar to what was reported in HDFS-3875, and the truncation 
> at DN2 was actually introduced as part of the HDFS-3875 solution. Filing 
> this jira for the issue reported here. HDFS-3875 was filed by [~tlipcon], 
> who proposed something similar there.
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> 

[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10341:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: 3.0.0-alpha1  (was: )
  Status: Resolved  (was: Patch Available)

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the number of timed-out pending replication blocks 
> is useful for gauging cluster health.
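
A minimal sketch of how such a counter could be exposed through the metrics2 
framework (illustrative names only; not necessarily what the committed patch 
uses):

{code:title=Sketch (illustrative names)}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(context = "dfs")
class PendingReplicationMetrics {
  // Exposed via JMX and metrics sinks like other NameNode metrics.
  @Metric("Number of timed-out pending replication blocks")
  MutableGaugeLong timedOutPendingReplications;

  // Called from the pending-replication timeout monitor.
  void setTimedOutCount(long count) {
    timedOutPendingReplications.set(count);
  }
}
{code}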



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312988#comment-15312988
 ] 

Arpit Agarwal commented on HDFS-10341:
--

+1 for the v4 patch. The test failures are unrelated. I pushed the fix to 
trunk. Thanks [~ajisakaa] and thanks [~xiaobingo] for the review.

The branch-2 patch will need changes due to refactoring introduced by HDFS-9869.


> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the number of timed-out pending replication blocks 
> is useful for gauging cluster health.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-845) Balancer documentation should not be in javadoc

2016-06-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312967#comment-15312967
 ] 

Mingliang Liu commented on HDFS-845:


[~szetszwo] Is this jira still valid? Thanks. Feel free to assign it to me if 
you are not actively working on it now.

> Balancer documentation should not be in javadoc
> ---
>
> Key: HDFS-845
> URL: https://issues.apache.org/jira/browse/HDFS-845
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, documentation
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Tsz Wo Nicholas Sze
>  Labels: newbie
>
> The best documentation for the balancer currently exists as a large JavaDoc 
> on the Balancer class. This is less than useful, especially since we no 
> longer generate javadocs for HDFS as part of the build process. We should 
> either extract it into a Forrest-style doc, or else change the production 
> javadoc to include certain whitelisted HDFS classes that we think users will 
> want to see.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312946#comment-15312946
 ] 

Hadoop QA commented on HDFS-10477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 38s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestAsyncHDFSWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807796/HDFS-10477.002.patch |
| JIRA Issue | HDFS-10477 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b5abad8b0a8a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ead61c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15635/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15635/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15635/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15635/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312926#comment-15312926
 ] 

Hudson commented on HDFS-10471:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9902 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9902/])
HDFS-10471. DFSAdmin#SetQuotaCommand's help msg is not correct. (aajisaka: rev 
1df6f5735c9d85e644d99d3ebfc4459490657004)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} never appears 
> before that point.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-10480:
-

Assignee: Rushabh S Shah

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Rushabh S Shah
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10471:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution!

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} never appears 
> before that point.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312880#comment-15312880
 ] 

Akira AJISAKA commented on HDFS-10471:
--

The test failure looks unrelated to the patch, committing this.

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} never appears 
> before that point.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312862#comment-15312862
 ] 

Hadoop QA commented on HDFS-10478:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
46s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 5s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807600/HDFS-10478-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10478 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a13808d9aaa1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 20d8cf7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15634/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15634/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15634/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15634/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: resolve volume path names
> ---
>
> Key: HDFS-10478
> URL: https://issues.apache.org/jira/browse/HDFS-10478
> Project: Hadoop HDFS
>  Issue 

[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-02 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312854#comment-15312854
 ] 

Xiaobing Zhou commented on HDFS-10341:
--

[~ajisakaa] patch v04 looks good. Can you verify if the test failures are 
related to the patch? Thank you.

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the number of timed-out pending replication blocks 
> is useful for gauging cluster health.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312848#comment-15312848
 ] 

Hadoop QA commented on HDFS-10462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HDFS-10462 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807802/HDFS-10462-002.patch |
| JIRA Issue | HDFS-10462 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15636/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) supports getting a token 
> using client credentials. However, the client-credentials support does not 
> pass the "resource" parameter required by Azure AD. This work adds support 
> for the "resource" parameter when acquiring the OAuth2 token from Azure AD, 
> so the client credentials can be used to authenticate to Azure Data Lake.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-02 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HDFS-10462:

Attachment: HDFS-10462-002.patch

Separated out the changes of HADOOP-12666 from this patch. Now this patch just 
contains the changes that need to go on top of HADOOP-12666.


> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) supports getting a token 
> using client credentials. However, the client-credentials support does not 
> pass the "resource" parameter required by Azure AD. This work adds support 
> for the "resource" parameter when acquiring the OAuth2 token from Azure AD, 
> so the client credentials can be used to authenticate to Azure Data Lake.
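
For reference, a client-credentials token request against the Azure AD v1 
endpoint carries the extra parameter roughly as below (placeholder values; the 
exact resource URI depends on the target service):

{noformat}
POST https://login.microsoftonline.com/<tenant-id>/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=<client-id>
&client_secret=<client-secret>
&resource=<resource-uri-of-the-target-service>
{noformat}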



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10480:
--
Description: Currently there is no easy way to obtain the list of active 
leases or files being written. It will be nice if we have an admin command to 
list open files and their lease holders.  (was: Currently there is a no easy 
way to obtain the list of active leases or files being written. It will be nice 
if we have an admin command to list open files and their lease holders.)

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10471:
-
Hadoop Flags: Reviewed
 Component/s: documentation

LGTM, +1.

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} never appears 
> before that point.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-02 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated HDFS-10477:
-
Attachment: HDFS-10477.002.patch

[~kihwal] good idea, thanks.
We can release the lock in processExtraRedundancyBlocksOnReCommission after it 
has scanned numBlocksPerIteration (default is 1) blocks.
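
A minimal sketch of that pattern (hypothetical method names, not the actual 
patch): scan blocks in bounded batches, releasing and re-acquiring the 
namesystem write lock between batches so other operations can make progress.

{code:title=Sketch (hypothetical names)}
void processExtraRedundancyBlocksInBatches(DatanodeDescriptor node) {
  Iterator<BlockInfo> it = node.getBlockIterator();
  while (it.hasNext()) {
    namesystem.writeLock();
    try {
      int scanned = 0;
      // Scan a bounded batch under the lock, then let other waiters in.
      while (it.hasNext() && scanned < numBlocksPerIteration) {
        processExtraRedundancyBlock(it.next());   // hypothetical helper
        scanned++;
      }
    } finally {
      namesystem.writeUnlock();
    }
  }
}
{code}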

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.002.patch, HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 
> DataNodes, it locked the Namesystem for about 7 minutes, as the log below 
> shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated blocks on 10.142.27.14:1004 during recommissioning
> 2016-05-26 20:13:25,370 INFO 
> 

[jira] [Updated] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10449:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2. Thanks [~tasanuma0829] for the contribution and 
thanks [~templedf] for the review!

> TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2
> -
>
> Key: HDFS-10449
> URL: https://issues.apache.org/jira/browse/HDFS-10449
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Fix For: 2.9.0
>
> Attachments: HDFS-10449.branch-2.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
>   Time elapsed: 8.729 sec  <<< FAILURE!
> java.lang.AssertionError: No exception was generated while stopping sink even 
> though HDFS was unavailable
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
> {noformat}
> This passes fine on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312791#comment-15312791
 ] 

Akira AJISAKA commented on HDFS-10449:
--

LGTM, +1.

> TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2
> -
>
> Key: HDFS-10449
> URL: https://issues.apache.org/jira/browse/HDFS-10449
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-10449.branch-2.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
>   Time elapsed: 8.729 sec  <<< FAILURE!
> java.lang.AssertionError: No exception was generated while stopping sink even 
> though HDFS was unavailable
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
> {noformat}
> This passes fine on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10479:
-
Labels: newbie  (was: )

> help of stat is confusing
> -
>
> Key: HDFS-10479
> URL: https://issues.apache.org/jira/browse/HDFS-10479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Priority: Trivial
>  Labels: newbie
>
> %b is actually printing the size of a file in bytes, while the help says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10478:

Status: Patch Available  (was: Open)

> DiskBalancer: resolve volume path names
> ---
>
> Key: HDFS-10478
> URL: https://issues.apache.org/jira/browse/HDFS-10478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10478-HDFS-1312.001.patch
>
>
> When creating a plan we don't fetch the volume names. But with the -v option 
> we try to print those paths for users to see how the data is being moved. 
> This patch fetches the volume names before a plan is persisted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-5059.
-
Resolution: Duplicate

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create and delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312679#comment-15312679
 ] 

Akira AJISAKA commented on HDFS-5059:
-

Hi [~boky01], I agree that this is a duplicate of HDFS-5111. Thank you for 
closing this issue, but the resolution should be "Duplicate" instead of 
"Fixed". "Fixed" is used when the source code is actually changed in this 
issue. I'll reopen this to change the resolution.

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create and delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reopened HDFS-5059:
-

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create and delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312546#comment-15312546
 ] 

Kihwal Lee edited comment on HDFS-10480 at 6/2/16 4:14 PM:
---

While debugging issues, I had to dump a huge fsimage to get the list of open 
files. I was looking for files that had been open for a long time, so that was 
okay, but it took a long time to get them.  The list may surprise you if there 
are runaway clients that keep renewing leases. I've seen files open for many 
months, surviving multiple rolling upgrades. They also pose a risk of data 
loss, since even the finalized blocks don't get re-replicated if the file is 
under construction.  If confirmed to be a "forgotten" file that is left open, 
the admin can use the {{hdfs debug recoverLease}} command to revoke the lease 
and close the file.
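
For reference, the debug command mentioned above takes the file path and an 
optional retry count:

{noformat}
hdfs debug recoverLease -path <path> [-retries <num-retries>]
{noformat}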


was (Author: kihwal):
While debugging issues, I had to dump a huge fsimage to get the list of open 
files. I was looking for files that had been open for a long time, so that was 
okay, but it took a long time to get them.  The list may surprise you if there 
are runaway clients that keep renewing leases. I've seen files open for many 
months, surviving multiple rolling upgrades. They also pose a risk of data 
loss, since even the finalized blocks don't get re-replicated if the file is 
under construction.  If conformed to be a "forgotten" file that is left open, 
the admin can use the {{hdfs debug recoverLease}} command to revoke the lease 
and close the file.

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312546#comment-15312546
 ] 

Kihwal Lee edited comment on HDFS-10480 at 6/2/16 4:11 PM:
---

While debugging issues, I had to dump a huge fsimage to get the list of open 
files. I was looking for files that had been open for a long time, so that was 
okay, but it took a long time to get them.  The list may surprise you if there 
are runaway clients that keep renewing leases. I've seen files open for many 
months, surviving multiple rolling upgrades. They also pose a risk of data 
loss, since even the finalized blocks don't get re-replicated if the file is 
under construction.  If conformed to be a "forgotten" file that is left open, 
the admin can use the {{hdfs debug recoverLease}} command to revoke the lease 
and close the file.


was (Author: kihwal):
While debugging issues, I had to dump a huge fsimage to get the list of open 
files. I was looking for files that had been open for a long time, so that was 
okay, but it took a long time to get them.  The list may surprise you if there 
are runaway clients that keep renewing leases. I've seen files open for many 
months, surviving multiple rolling upgrades. They also pose a risk of data 
loss, since even the finalized blocks don't get re-replicated if the file is 
under construction.

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It will be nice if we have an admin command to list open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312546#comment-15312546
 ] 

Kihwal Lee commented on HDFS-10480:
---

While debugging issues, I had to dump a huge fsimage to get the list of open 
files. I was looking for files that had been open for a long time, so that was 
okay, but it took a long time to get them.  The list may surprise you if there 
are runaway clients that keep renewing leases. I've seen files open for many 
months, surviving multiple rolling upgrades. They also pose a risk of data 
loss, since even the finalized blocks don't get re-replicated if the file is 
under construction.

> Add an admin command to list currently open files
> -
>
> Key: HDFS-10480
> URL: https://issues.apache.org/jira/browse/HDFS-10480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> Currently there is no easy way to obtain the list of active leases or files 
> being written. It would be nice to have an admin command that lists open files 
> and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10480) Add an admin command to list currently open files

2016-06-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-10480:
-

 Summary: Add an admin command to list currently open files
 Key: HDFS-10480
 URL: https://issues.apache.org/jira/browse/HDFS-10480
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


Currently there is no easy way to obtain the list of active leases or files 
being written. It would be nice to have an admin command that lists open files 
and their lease holders.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline

2016-06-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312510#comment-15312510
 ] 

Wei-Chiu Chuang commented on HDFS-9805:
---

Thanks [~ghelmling] for the patch! I believe we've hit the same issue and I'm 
interested in moving this forward.
Would you mind if I rebased the patch and added the configs and the unit tests 
as Colin suggested?

> TCP_NODELAY not set before SASL handshake in data transfer pipeline
> ---
>
> Key: HDFS-9805
> URL: https://issues.apache.org/jira/browse/HDFS-9805
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9805.002.patch, HDFS-9805.003.patch
>
>
> There are a few places in the DN -> DN block transfer pipeline where 
> TCP_NODELAY is not set before doing a SASL handshake:
> * in {{DataNode.DataTransfer::run()}}
> * in {{DataXceiver::replaceBlock()}}
> * in {{DataXceiver::writeBlock()}}
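
For context, the handshake consists of small request/response messages that 
Nagle's algorithm will happily delay. A minimal sketch of setting TCP_NODELAY 
on a client socket before any handshake traffic, using plain 
{{java.net.Socket}}; the host and port here are hypothetical, and this is a 
sketch of the general technique rather than the patch itself:

{code}
import java.net.InetSocketAddress;
import java.net.Socket;

public class NoDelayExample {
  public static void main(String[] args) throws Exception {
    Socket socket = new Socket();
    // Disable Nagle's algorithm so the small SASL negotiation messages
    // are sent immediately instead of being buffered.
    socket.setTcpNoDelay(true);
    socket.connect(new InetSocketAddress("datanode.example.com", 50010), 10000);
    // ... perform the SASL handshake over the socket's streams ...
    socket.close();
  }
}
{code}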



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Xiaohe Lan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohe Lan updated HDFS-10479:
--
Description: 
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

{code}
hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b)
...
{code}

  was:
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.


> help of stat is confusing
> -
>
> Key: HDFS-10479
> URL: https://issues.apache.org/jira/browse/HDFS-10479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Priority: Trivial
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}
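
The distinction maps directly onto {{FileStatus}}: {{%b}} prints the value of 
{{getLen()}} (bytes), while the block size is a separate field. A minimal 
sketch with a hypothetical path, illustrating the two values the help text 
conflates:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatFieldsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/tmp/sample.txt"));
    System.out.println("length in bytes (what %b prints): " + st.getLen());
    System.out.println("block size      (what %o prints): " + st.getBlockSize());
  }
}
{code}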



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Xiaohe Lan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohe Lan updated HDFS-10479:
--
Description: 
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.

  was:
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.


> help of stat is confusing
> -
>
> Key: HDFS-10479
> URL: https://issues.apache.org/jira/browse/HDFS-10479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Priority: Trivial
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b), type (%F), group name of owner (%g),
>   name (%n), block size (%o), replication (%r), user name
>   of owner (%u), modification date (%y, %Y).
>   %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
>   %Y shows milliseconds since January 1, 1970 UTC.
>   If the format is not specified, %y is used by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-06-02 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Assignee: James Clampffer

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312438#comment-15312438
 ] 

Kihwal Lee commented on HDFS-10477:
---

It would be better if the locking were done per storage instead of per node.

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack that has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated blocks on 10.142.27.14:1004 during recommissioning
> 2016-05-26 20:13:25,370 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.28:1004
> 2016-05-26 

[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-06-02 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-9271:

Description: 
Expose via C and C++ API:
* mkdirs
* rename
* delete
* stat
* chmod
* chown
* getListing
* setOwner


  was:
Expose via C and C++ API:
* mkdirs
* rename
* delete
* stat
* chmod
* chown
* getListing
* setOwner
* fsync



> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9271) Implement basic NN operations

2016-06-02 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312415#comment-15312415
 ] 

Bob Hansen commented on HDFS-9271:
--

Some work was done in HDFS-10464 and HDFS-10465.

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner
> * fsync



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-06-02 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9271:
-
Assignee: Anatoli Shein  (was: James Clampffer)

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Anatoli Shein
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner
> * fsync



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-02 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch, 
> HDFS-10464.HDFS-8707.001.patch, HDFS-10464.HDFS-8707.002.patch, 
> HDFS-10464.HDFS-8707.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-02 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312410#comment-15312410
 ] 

Bob Hansen commented on HDFS-10464:
---

+1.  Looks good.  Thanks, Anatoli.

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch, 
> HDFS-10464.HDFS-8707.001.patch, HDFS-10464.HDFS-8707.002.patch, 
> HDFS-10464.HDFS-8707.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Xiaohe Lan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohe Lan updated HDFS-10479:
--
Description: 
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.

  was:
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

~~~
hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.
~~~


> help of stat is confusing
> -
>
> Key: HDFS-10479
> URL: https://issues.apache.org/jira/browse/HDFS-10479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Priority: Trivial
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b), type (%F), group name of owner (%g),
>   name (%n), block size (%o), replication (%r), user name
>   of owner (%u), modification date (%y, %Y).
>   %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
>   %Y shows milliseconds since January 1, 1970 UTC.
>   If the format is not specified, %y is used by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Xiaohe Lan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaohe Lan updated HDFS-10479:
--
Description: 
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

~~~
hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.
~~~

  was:
%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.


> help of stat is confusing
> -
>
> Key: HDFS-10479
> URL: https://issues.apache.org/jira/browse/HDFS-10479
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Priority: Trivial
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> ~~~
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b), type (%F), group name of owner (%g),
>   name (%n), block size (%o), replication (%r), user name
>   of owner (%u), modification date (%y, %Y).
>   %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
>   %Y shows milliseconds since January 1, 1970 UTC.
>   If the format is not specified, %y is used by default.
> ~~~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10479) help of stat is confusing

2016-06-02 Thread Xiaohe Lan (JIRA)
Xiaohe Lan created HDFS-10479:
-

 Summary: help of stat is confusing
 Key: HDFS-10479
 URL: https://issues.apache.org/jira/browse/HDFS-10479
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.2
Reporter: Xiaohe Lan
Priority: Trivial


%b is actually printing the size of a file in bytes, while in help it says 
filesize in blocks.

hdfs dfs -help stat
-stat [format] <path> ... :
  Print statistics about the file/directory at <path>
  in the specified format. Format accepts filesize in
  blocks (%b), type (%F), group name of owner (%g),
  name (%n), block size (%o), replication (%r), user name
  of owner (%u), modification date (%y, %Y).
  %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and
  %Y shows milliseconds since January 1, 1970 UTC.
  If the format is not specified, %y is used by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-10367:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks, [~brahmareddy].

> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.005.patch, 
> HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
>   at org.apache.hadoop.ipc.Server.(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}
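
The general way to avoid such collisions is to let the OS hand out an 
ephemeral port instead of hard-coding one. A minimal sketch, independent of 
the patch, using plain {{java.net.ServerSocket}}; retry helpers like 
{{ServerSocketUtil}} build on the same idea:

{code}
import java.net.ServerSocket;

public class FreePortExample {
  public static void main(String[] args) throws Exception {
    // Port 0 asks the OS for any free ephemeral port, so two concurrent
    // test runs can never collide on a fixed port such as 9820.
    try (ServerSocket ss = new ServerSocket(0)) {
      System.out.println("bound to free port " + ss.getLocalPort());
    }
  }
}
{code}

There is still a small window if the port is closed and then re-bound by the 
test, which is why a retry loop around the bind is useful on top of this.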



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312270#comment-15312270
 ] 

Hadoop QA commented on HDFS-10471:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 
new + 204 unchanged - 24 fixed = 208 total (was 228) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807699/HDFS-10471.003.patch |
| JIRA Issue | HDFS-10471 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4ca5b2d7488f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99675e0 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15633/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15633/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15633/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15633/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15633/console |
| 

[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-06-02 Thread Nikhil Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312249#comment-15312249
 ] 

Nikhil Joshi commented on HDFS-7240:


Unsubscribe


Nikhil
@nikhilj0shi 




> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312207#comment-15312207
 ] 

Hudson commented on HDFS-10367:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9899 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9899/])
HDFS-10367. TestDFSShell.testMoveWithTargetPortEmpty fails with Address 
(iwasakims: rev aadb77e412ab9d4ad05a0bd8b37d547ba5adad03)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java


> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.005.patch, 
> HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
>   at org.apache.hadoop.ipc.Server.(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2016-06-02 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312179#comment-15312179
 ] 

Masatake Iwasaki commented on HDFS-9476:


Thanks, [~ajisakaa].

> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Fix For: 2.7.3
>
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent one is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}
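
The stack trace goes through {{dfsOpenFileWithRetries}}, i.e. the test already 
wraps {{open()}} in retries because "Cannot obtain block length" can be 
transient while a block that was being written is finalized. A minimal sketch 
of such a wrapper, as a generic illustration rather than the actual test code:

{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenWithRetries {
  // Retry open() a few times before giving up; assumes attempts >= 1.
  static InputStream openWithRetries(FileSystem fs, Path p, int attempts)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return fs.open(p);
      } catch (IOException e) {
        last = e;              // remember the failure and back off briefly
        Thread.sleep(1000L);
      }
    }
    throw last;
  }
}
{code}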



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312173#comment-15312173
 ] 

Masatake Iwasaki commented on HDFS-10367:
-

+1, committing this.

> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.005.patch, 
> HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
>   at org.apache.hadoop.ipc.Server.(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10471:
-
Attachment: HDFS-10471.003.patch

Thanks [~ajisakaa] for reviewing. Attaching a patch to address the comment.

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the command related to SetQuota is not shown 
> correctly. In the message, the name {{quota}} is shown as {{N}}, even though 
> {{N}} is never defined beforehand.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The {{-setSpaceQuota}} command has a similar problem.
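
For context on what the help is describing, a minimal sketch of the 
equivalent API call, with a hypothetical directory; 
{{HdfsConstants.QUOTA_DONT_SET}} leaves the space quota untouched:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SetQuotaExample {
  public static void main(String[] args) throws Exception {
    Path dir = new Path("/user/project");
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) dir.getFileSystem(new Configuration());
    // Allow at most 10 names (files + directories) under /user/project,
    // i.e. the value the help message confusingly calls N. A quota of 1
    // keeps the directory empty, since the directory itself counts as a name.
    dfs.setQuota(dir, 10, HdfsConstants.QUOTA_DONT_SET);
  }
}
{code}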



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312155#comment-15312155
 ] 

Yiqun Lin edited comment on HDFS-10471 at 6/2/16 11:30 AM:
---

Thanks [~ajisakaa] for reviewing. Attaching a patch to address the comment.


was (Author: linyiqun):
Thanks [~ajisakaa] for reviewing. Attach a patch fot addressing the comment.

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch, 
> HDFS-10471.003.patch
>
>
> The help message of the command related to SetQuota is not shown 
> correctly. In the message, the name {{quota}} is shown as {{N}}, even though 
> {{N}} is never defined beforehand.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The {{-setSpaceQuota}} command has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3584) Blocks are getting marked as corrupt with append operation under high load.

2016-06-02 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312138#comment-15312138
 ] 

Harsh J commented on HDFS-3584:
---

HDFS-10240 appears to report a similar issue.

> Blocks are getting marked as corrupt with append operation under high load.
> ---
>
> Key: HDFS-3584
> URL: https://issues.apache.org/jira/browse/HDFS-3584
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Brahma Reddy Battula
>
> Scenario:
> = 
> 1. There are two clients, cli1 and cli2. cli1 writes a file F1 and does not 
> close it.
> 2. cli2 calls append on the unclosed file, which triggers a lease recovery.
> 3. cli1 is closed.
> 4. Lease recovery completes with an updated GS on the DN; when the BlockReport 
> arrives, the block is marked corrupt because of the GS mismatch.
> 5. Now a CommitBlockSync arrives; this also fails, since the file was already 
> closed by cli1 and its state in the NN is Finalized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312116#comment-15312116
 ] 

Hadoop QA commented on HDFS-10367:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 52s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 55s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 33s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807649/HDFS-10367.005.patch |
| JIRA Issue | HDFS-10367 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 36f338262329 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 16b1cc7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15632/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15632/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15632/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDFSShell.testMoveWithTargetPortEmpty fails with 

[jira] [Commented] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2016-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312084#comment-15312084
 ] 

Hudson commented on HDFS-9476:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9897 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9897/])
HDFS-9476. TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage (aajisaka: rev 
69555fca066815053dd9168ebe15868a5c02cdcd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java


> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Fix For: 2.7.3
>
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent one is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-9476:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

Committed to branch-2.7 and above. Thanks [~iwasakims] for finding the root 
cause and updating the patch, and thanks [~xiaobingo] and [~walter.k.su] for 
reviewing.

> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Fix For: 2.7.3
>
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent failure is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312056#comment-15312056
 ] 

Akira AJISAKA commented on HDFS-9476:
-

+1, committing this.

> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent failure is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-02 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311990#comment-15311990
 ] 

Weiwei Yang commented on HDFS-10440:


Sure [~kihwal], I will add Reserved Space for Replicas and Blocks in the storage 
section. I will also add another column in the Block Pool section to indicate 
the actor state, displaying BPServiceActor.RunningState. I will upload a patch 
with screenshots shortly after testing on a trunk build.

Thanks a lot for the suggestions.
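
For context, the information such a page would render is already exposed 
through the standard Hadoop /jmx servlet, so the UI work is mostly templating. 
Below is a minimal sketch of fetching it; the DataNodeInfo bean is standard, 
but the HTTP port and the exact attribute set should be treated as assumptions 
for your build.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class DataNodeJmxProbe {
  public static void main(String[] args) throws Exception {
    // Port 9864 is the trunk-era DataNode HTTP port; adjust for your cluster.
    URL url = new URL(
        "http://localhost:9864/jmx?qry=Hadoop:service=DataNode,name=DataNodeInfo");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        // JSON with version, cluster ID, volume info, etc.
        System.out.println(line);
      }
    }
  }
}
{code}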

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, datanode_html.001.jpg, 
> datanode_utilities.001.jpg, dn_web_ui_mockup.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9476) TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-9476:

Assignee: Masatake Iwasaki  (was: Akira AJISAKA)

> TestDFSUpgradeFromImage#testUpgradeFromRel1BBWImage occasionally fail
> -
>
> Key: HDFS-9476
> URL: https://issues.apache.org/jira/browse/HDFS-9476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9476.002.patch, HDFS-9476.01.patch
>
>
> This test occasionally fails. For example, the most recent failure is:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2587/
> Error Message
> {noformat}
> Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
> {noformat}
> Stacktrace
> {noformat}
> java.io.IOException: Cannot obtain block length for 
> LocatedBlock{BP-1371507683-67.195.81.153-1448798439809:blk_7162739548153522810_1020;
>  getBlockSize()=1024; corrupt=false; offset=0; 
> locs=[DatanodeInfoWithStorage[127.0.0.1:33080,DS-c5eaf2b4-2ee6-419d-a8a0-44a5df5ef9a1,DISK]]}
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
>   at 
> org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10471:
-
Target Version/s: 2.8.0

> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} does not appear 
> anywhere before.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10471) DFSAdmin#SetQuotaCommand's help msg is not correct

2016-06-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311967#comment-15311967
 ] 

Akira AJISAKA commented on HDFS-10471:
--

Thanks [~linyiqun] for updating the patch.
{code}
  "\t\t1. quota is not a positive integer, or\n" +
{code}
1. Zero space quota is valid (HDFS-10242), so would you update this sentence to 
"quota is not a positive integer or zero"?
2. (minor nit) Would you fix the checkstyle warnings?
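
Concretely, a sketch of how the revised lines could read once both points are 
addressed (a hypothetical snippet; the surrounding DFSAdmin help text is 
assumed unchanged):

{code}
public class SetQuotaHelp {
  // Error-condition portion of the -setQuota help message, with "N" replaced
  // by "quota" and zero allowed per HDFS-10242.
  static final String ERROR_CONDITIONS =
      "\t\tFor each directory, attempt to set the quota. " +
      "An error will be reported if\n" +
      "\t\t1. quota is not a positive integer or zero, or\n" +
      "\t\t2. User is not an administrator, or\n" +
      "\t\t3. The directory does not exist or is a file.\n";
}
{code}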


> DFSAdmin#SetQuotaCommand's help msg is not correct
> --
>
> Key: HDFS-10471
> URL: https://issues.apache.org/jira/browse/HDFS-10471
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10471.001.patch, HDFS-10471.002.patch
>
>
> The help message of the SetQuota-related command is not shown correctly. In 
> the message, the name {{quota}} is shown as {{N}}, but {{N}} does not appear 
> anywhere before.
> {noformat}
> -setQuota <quota> <dirname>...<dirname>: Set the quota <quota> for each 
> directory <dirName>.
>   The directory quota is a long integer that puts a hard limit
>   on the number of names in the directory tree
>   For each directory, attempt to set the quota. An error will be 
> reported if
>   1. N is not a positive integer, or
>   2. User is not an administrator, or
>   3. The directory does not exist or is a file.
>   Note: A quota of 1 would force the directory to remain empty.
> {noformat}
> The command {{-setSpaceQuota}} also has a similar problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-02 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-10367:

Attachment: HDFS-10367.005.patch

> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.005.patch, 
> HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}
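
For illustration, a minimal sketch (not the committed fix) of the usual way to 
avoid such bind failures in tests: bind to port 0 so the OS assigns a free 
ephemeral port, then hand that port to the component under test.

{code}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortProbe {
  // Ask the OS for a currently free port by binding port 0 and reading back
  // the assigned port number. The socket is closed before the port is reused.
  static int getFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }
}
{code}

Note there is an inherent race between probing for the port and rebinding it, 
so in practice this is usually combined with a retry loop around the bind.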



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10331) Use java.util.zip.CRC32 for java8 or above in libhadoop

2016-06-02 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311883#comment-15311883
 ] 

Haohui Mai commented on HDFS-10331:
---

Should we remove the unused code then?

> Use java.util.zip.CRC32 for java8 or above in libhadoop
> ---
>
> Key: HDFS-10331
> URL: https://issues.apache.org/jira/browse/HDFS-10331
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, hdfs-client
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>
> In java8, performance of intrinsic CRC32 has been dramatically improved.
> See: https://bugs.openjdk.java.net/browse/JDK-7088419
> I carried out an in-memory benchmark of throughput on a server with two 
> E5-2630 v2 CPUs; results:
> (single threaded)
> java7  java.util.zip.CRC32: 0.81GB/s
> hdfs DataChecksum, native: 1.46GB/s
> java8  java.util.zip.CRC32: 2.39GB/s
> hdfs DataChecksum, CRC32 on java8: 2.39GB/s
> IMHO we could either:
> A) provide a configuration for users to switch CRC32 implementations;
> or B) on java8 or above, always use the intrinsic CRC32.
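
For reference, a single-threaded sketch of the kind of in-memory throughput 
benchmark the description reports (buffer size and iteration count are 
arbitrary, and real numbers depend on JIT warm-up):

{code}
import java.util.zip.CRC32;

public class Crc32Bench {
  public static void main(String[] args) {
    byte[] buf = new byte[64 * 1024 * 1024];  // 64 MB buffer; zeros are fine
    CRC32 crc = new CRC32();
    int iterations = 32;
    long start = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
      crc.reset();
      crc.update(buf, 0, buf.length);
    }
    double seconds = (System.nanoTime() - start) / 1e9;
    double gb = (double) buf.length * iterations / (1 << 30);
    System.out.printf("CRC32 throughput: %.2f GB/s (checksum=%d)%n",
        gb / seconds, crc.getValue());
  }
}
{code}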



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org