[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311759#comment-15311759
 ] 

Zhe Zhang commented on HDFS-10458:
--

Thanks for the comment [~shv]. I started updating the patch using 
{{encryptionZones == null}} as the condition check, but found that we would 
have to update every reference to {{encryptionZones}} with null-handling 
logic; {{listEncryptionZones}} and {{createEncryptionZone}}, for example, 
would both need to handle the null value. It's probably not worth that much 
code change just to avoid adding a field, but let me know your thoughts.

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10458.00.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires the {{readLock}} and 
> checks whether the path belongs to an EZ. On a busy system with potentially 
> many listing operations, this could cause lock contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} 
> that returns whether the system has any EZ. If there is no EZ at all, 
> {{getFileEncryptionInfo}} should return null without taking the {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it doesn't 
> itself need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, the path cannot be 
> encrypted.
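
A minimal, self-contained sketch of the fast path proposed above -- the class 
and field names here are illustrative assumptions, not the committed patch:

{code:java}
import java.util.TreeMap;

// Stands in for EncryptionZoneManager. numZones is only written while the
// FSNamesystem write lock is held, so an unlocked read of the volatile field
// is safe: once a zone has been created, later readers cannot observe a
// stale "no zones" answer.
class EncryptionZoneCheckSketch {
  private final TreeMap<Long, String> encryptionZones = new TreeMap<>();
  private volatile int numZones = 0;

  boolean hasEncryptionZone() {               // callable without readLock
    return numZones > 0;
  }

  void addEncryptionZone(long inodeId, String keyName) {
    encryptionZones.put(inodeId, keyName);    // caller holds the write lock
    numZones = encryptionZones.size();
  }
}
{code}

With a check like this, {{getFileEncryptionInfo}} can return null immediately 
when {{hasEncryptionZone()}} is false and take the {{readLock}} only otherwise.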






[jira] [Updated] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-01 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10478:

Attachment: HDFS-10478-HDFS-1312.001.patch

The test changes are not related to this patch, but instead of filing a 
separate JIRA I am adding another test here.


> DiskBalancer: resolve volume path names
> ---
>
> Key: HDFS-10478
> URL: https://issues.apache.org/jira/browse/HDFS-10478
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10478-HDFS-1312.001.patch
>
>
> When creating a plan we don't fetch the names of volumes, but with the -v 
> option we try to print those paths so users can see how the data is being 
> moved. This patch resolves the volume names before a plan is persisted.
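
As a rough illustration of the idea (all names below are hypothetical, not the 
DiskBalancer code), resolving volume paths before persisting means each move 
step carries a printable path instead of only a volume UUID:

{code:java}
import java.util.Map;

class PlanPathResolverSketch {
  static class MoveStep {
    String sourceUuid, destUuid;        // what the plan originally records
    String sourcePath, destPath;        // what "-v" wants to print
  }

  // Substitute human-readable base paths for volume UUIDs in every step,
  // falling back to the UUID when no path is known.
  static void resolveVolumePaths(Map<String, String> uuidToPath,
                                 Iterable<MoveStep> steps) {
    for (MoveStep s : steps) {
      s.sourcePath = uuidToPath.getOrDefault(s.sourceUuid, s.sourceUuid);
      s.destPath = uuidToPath.getOrDefault(s.destUuid, s.destUuid);
    }
  }
}
{code}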






[jira] [Updated] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-6962:
-
Attachment: HDFS-6962.001.patch

Patch 001:
* Just a rebase of HDFS-6962.1.patch
* Passes the build

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.1.patch
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ Check ACLs on /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml; everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test ACL inheritance
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ Check ACLs on /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x), even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test ACL inheritance with the new umaskmode parameter
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other entries -- except the 
> POSIX owner -- ) with the group bits of the dfs.umaskmode property when 
> creating a directory with an inherited ACL.
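
A tiny, self-contained sketch -- an inference from the observed output above, 
not HDFS source -- showing that ANDing the inherited default:mask with the 
complement of the umask's group bits reproduces the effective masks seen in 
steps 5 and 8:

{code:java}
public class AclMaskSketch {
  public static void main(String[] args) {
    int defaultMask = 07;          // default:mask::rwx inherited from /tmp/ACLS
    for (int umask : new int[] {027, 010}) {
      int groupBits = (umask >> 3) & 07;        // group octet of dfs.umaskmode
      int effective = defaultMask & ~groupBits & 07;
      System.out.printf("umask %03o -> mask::%s%n", umask, rwx(effective));
    }
  }

  static String rwx(int bits) {
    return ((bits & 4) != 0 ? "r" : "-")
         + ((bits & 2) != 0 ? "w" : "-")
         + ((bits & 1) != 0 ? "x" : "-");
  }
}
// Prints "umask 027 -> mask::r-x" and "umask 010 -> mask::rw-",
// matching the effective masks observed in steps 5 and 8.
{code}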






[jira] [Commented] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311555#comment-15311555
 ] 

Hadoop QA commented on HDFS-10462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s 
{color} | {color:red} root: The patch generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 25s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 36s 
{color} | {color:green} hadoop-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-tools-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 57s {color} 

[jira] [Commented] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311545#comment-15311545
 ] 

Hadoop QA commented on HDFS-10477:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807561/HDFS-10477.patch |
| JIRA Issue | HDFS-10477 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 15eea8fda5c9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 16b1cc7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15630/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15630/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15630/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15630/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> 

[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-06-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311534#comment-15311534
 ] 

Anu Engineer commented on HDFS-7240:


As promised earlier, we would like to host an Ozone design review meeting. 
The agenda is to discuss the Ozone design and future work.
{noformat}
Anu Engineer is inviting you to a scheduled Zoom meeting. 

Topic: Ozone design review
Time: Jun 9, 2016 2:00 PM (GMT-7:00) Pacific Time (US and Canada) 

Join from PC, Mac, Linux, iOS or Android: 
https://hortonworks.zoom.us/j/679978944

Or join by phone:

+1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)
+1 855 880 1246 (US Toll Free)
+1 888 974 9888 (US Toll Free)
Meeting ID: 679 978 944 
International numbers available: 
https://hortonworks.zoom.us/zoomconference?m=VJJvnfHtsvBoBXaaCftwMsOm8b-4ZkBj 
{noformat}

[~drankye] [~steve_l] [~ajisakaa] My apologies for a very North America-centric 
meeting time; we will host a follow-up meeting for contributors from Asia and 
Europe. 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer, i.e., the datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.






[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-01 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HDFS-10462:

Status: Patch Available  (was: Open)

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) can obtain a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake. 
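
For context, a hedged sketch of an Azure AD client-credentials token request 
that includes the "resource" parameter (this uses the Azure AD v1 endpoint; 
the resource URI and class name are assumptions for illustration, not the 
patch's code):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AdlTokenRequestSketch {
  public static void main(String[] args) throws Exception {
    String tenant = "TENANT_ID", clientId = "CLIENT_ID", secret = "CLIENT_SECRET";
    // Azure AD rejects client-credential requests for a service token unless
    // the "resource" parameter names the target service.
    String body = "grant_type=client_credentials"
        + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
        + "&client_secret=" + URLEncoder.encode(secret, "UTF-8")
        + "&resource=" + URLEncoder.encode("https://datalake.azure.net/", "UTF-8");

    HttpURLConnection conn = (HttpURLConnection) new URL(
        "https://login.microsoftonline.com/" + tenant + "/oauth2/token")
        .openConnection();
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes("UTF-8"));
    }
    // A 200 response carries JSON with access_token; the client then sends
    // "Authorization: Bearer <token>" on Data Lake requests.
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}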






[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-01 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HDFS-10462:

Attachment: HDFS-10462-001.patch

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) can obtain a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake. 






[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-01 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HDFS-10462:

Assignee: Atul Sikaria
  Status: Patch Available  (was: Open)

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) can obtain a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake. 






[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys

2016-06-01 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HDFS-10462:

Status: Open  (was: Patch Available)

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HDFS-10462
> URL: https://issues.apache.org/jira/browse/HDFS-10462
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The current OAuth2 support (used by HADOOP-12666) can obtain a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake. 






[jira] [Created] (HDFS-10478) DiskBalancer: resolve volume path names

2016-06-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10478:
---

 Summary: DiskBalancer: resolve volume path names
 Key: HDFS-10478
 URL: https://issues.apache.org/jira/browse/HDFS-10478
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


When creating a plan we don't fetch the names of volumes, but with the -v 
option we try to print those paths so users can see how the data is being 
moved. This patch resolves the volume names before a plan is persisted.







[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-01 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated HDFS-10477:
-
Attachment: HDFS-10477.patch

This patch releases the write lock after stopping decommission of each 
DataNode, so other handlers have a chance to acquire the write lock; this 
prevents the Namesystem from being locked for too long.
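
A self-contained sketch of the locking change (the method names and local lock 
are illustrative assumptions; the real patch operates on the Namesystem lock):

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class PerDatanodeLockSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);

  // Before: one write-lock section around all 46 DataNodes (~7 minutes).
  // After: lock per DataNode, so other handlers can run between iterations.
  void stopDecommission(List<String> datanodes) {
    for (String dn : datanodes) {
      fsLock.writeLock().lock();
      try {
        invalidateOverReplicatedBlocks(dn);   // ~8-10s of work per node above
      } finally {
        fsLock.writeLock().unlock();          // released between DataNodes
      }
    }
  }

  private void invalidateOverReplicatedBlocks(String dn) {
    // placeholder for BlockManager's per-node block invalidation work
  }
}
{code}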

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated blocks on 10.142.27.14:1004 during recommissioning
> 2016-05-26 20:13:25,370 INFO 
> 

[jira] [Updated] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-01 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated HDFS-10477:
-
Status: Patch Available  (was: Open)

> Stop decommission a rack of DataNodes caused NameNode fail over to standby
> --
>
> Key: HDFS-10477
> URL: https://issues.apache.org/jira/browse/HDFS-10477
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: HDFS-10477.patch
>
>
> In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, 
> it locked the Namesystem for about 7 minutes, as the log below shows:
> {code}
> 2016-05-26 20:11:41,697 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.27:1004
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 285258 over-replicated blocks on 10.142.27.27:1004 during recommissioning
> 2016-05-26 20:11:51,171 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.118:1004
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 279923 over-replicated blocks on 10.142.27.118:1004 during recommissioning
> 2016-05-26 20:11:59,972 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.113:1004
> 2016-05-26 20:12:09,007 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 294307 over-replicated blocks on 10.142.27.113:1004 during recommissioning
> 2016-05-26 20:12:09,008 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.117:1004
> 2016-05-26 20:12:18,055 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 314381 over-replicated blocks on 10.142.27.117:1004 during recommissioning
> 2016-05-26 20:12:18,056 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.130:1004
> 2016-05-26 20:12:25,938 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 272779 over-replicated blocks on 10.142.27.130:1004 during recommissioning
> 2016-05-26 20:12:25,939 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.121:1004
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 287248 over-replicated blocks on 10.142.27.121:1004 during recommissioning
> 2016-05-26 20:12:34,134 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.33:1004
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 299868 over-replicated blocks on 10.142.27.33:1004 during recommissioning
> 2016-05-26 20:12:43,020 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.137:1004
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 303914 over-replicated blocks on 10.142.27.137:1004 during recommissioning
> 2016-05-26 20:12:52,220 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.51:1004
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 281175 over-replicated blocks on 10.142.27.51:1004 during recommissioning
> 2016-05-26 20:13:00,362 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.12:1004
> 2016-05-26 20:13:08,756 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 274880 over-replicated blocks on 10.142.27.12:1004 during recommissioning
> 2016-05-26 20:13:08,757 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.15:1004
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 286334 over-replicated blocks on 10.142.27.15:1004 during recommissioning
> 2016-05-26 20:13:17,185 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.14:1004
> 2016-05-26 20:13:25,369 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 
> 280219 over-replicated blocks on 10.142.27.14:1004 during recommissioning
> 2016-05-26 20:13:25,370 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
> Decommissioning 10.142.27.28:1004
> 2016-05-26 20:13:33,768 INFO 
> 

[jira] [Created] (HDFS-10477) Stop decommission a rack of DataNodes caused NameNode fail over to standby

2016-06-01 Thread yunjiong zhao (JIRA)
yunjiong zhao created HDFS-10477:


 Summary: Stop decommission a rack of DataNodes caused NameNode 
fail over to standby
 Key: HDFS-10477
 URL: https://issues.apache.org/jira/browse/HDFS-10477
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.2
Reporter: yunjiong zhao
Assignee: yunjiong zhao


In our cluster, when we stopped decommissioning a rack which has 46 DataNodes, it 
locked the Namesystem for about 7 minutes, as the log below shows:
{code}
2016-05-26 20:11:41,697 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.27:1004
2016-05-26 20:11:51,171 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 285258 
over-replicated blocks on 10.142.27.27:1004 during recommissioning
2016-05-26 20:11:51,171 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.118:1004
2016-05-26 20:11:59,972 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 279923 
over-replicated blocks on 10.142.27.118:1004 during recommissioning
2016-05-26 20:11:59,972 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.113:1004
2016-05-26 20:12:09,007 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 294307 
over-replicated blocks on 10.142.27.113:1004 during recommissioning
2016-05-26 20:12:09,008 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.117:1004
2016-05-26 20:12:18,055 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 314381 
over-replicated blocks on 10.142.27.117:1004 during recommissioning
2016-05-26 20:12:18,056 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.130:1004
2016-05-26 20:12:25,938 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 272779 
over-replicated blocks on 10.142.27.130:1004 during recommissioning
2016-05-26 20:12:25,939 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.121:1004
2016-05-26 20:12:34,134 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 287248 
over-replicated blocks on 10.142.27.121:1004 during recommissioning
2016-05-26 20:12:34,134 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.33:1004
2016-05-26 20:12:43,020 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 299868 
over-replicated blocks on 10.142.27.33:1004 during recommissioning
2016-05-26 20:12:43,020 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.137:1004
2016-05-26 20:12:52,220 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 303914 
over-replicated blocks on 10.142.27.137:1004 during recommissioning
2016-05-26 20:12:52,220 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.51:1004
2016-05-26 20:13:00,362 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 281175 
over-replicated blocks on 10.142.27.51:1004 during recommissioning
2016-05-26 20:13:00,362 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.12:1004
2016-05-26 20:13:08,756 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 274880 
over-replicated blocks on 10.142.27.12:1004 during recommissioning
2016-05-26 20:13:08,757 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.15:1004
2016-05-26 20:13:17,185 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 286334 
over-replicated blocks on 10.142.27.15:1004 during recommissioning
2016-05-26 20:13:17,185 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.14:1004
2016-05-26 20:13:25,369 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 280219 
over-replicated blocks on 10.142.27.14:1004 during recommissioning
2016-05-26 20:13:25,370 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.28:1004
2016-05-26 20:13:33,768 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 280623 
over-replicated blocks on 10.142.27.28:1004 during recommissioning
2016-05-26 20:13:33,769 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Stop 
Decommissioning 10.142.27.119:1004
2016-05-26 20:13:42,816 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Invalidated 294675 
over-replicated blocks on 10.142.27.119:1004 during recommissioning
2016-05-26 20:13:42,816 INFO 

[jira] [Commented] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311352#comment-15311352
 ] 

Hadoop QA commented on HDFS-10464:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
41s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 15s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 43s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 0s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807537/HDFS-10464.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-10464 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8db3c0d180ff 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 4376526 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15629/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15629/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>

[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311347#comment-15311347
 ] 

Konstantin Shvachko commented on HDFS-10458:


I agree we should not optimize for the case when encryption zones are actually 
used.
Can we just set {{encryptionZones = null}} in the constructor, which would mean 
that there are no encryption zones yet, and initialize it when the first zone 
is added? That way we can avoid adding an extra field.
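
A minimal sketch of that alternative, assuming null means "no zones yet" (the 
names are illustrative, not the actual EncryptionZoneManager code):

{code:java}
import java.util.TreeMap;

class LazyEncryptionZonesSketch {
  private TreeMap<Long, String> encryptionZones; // null until the first zone

  boolean hasEncryptionZone() {
    return encryptionZones != null;
  }

  void createEncryptionZone(long inodeId, String keyName) {
    if (encryptionZones == null) {
      encryptionZones = new TreeMap<>();   // initialize on the first zone
    }
    encryptionZones.put(inodeId, keyName);
  }

  Iterable<Long> listEncryptionZones() {
    // every reader needs this null guard -- the code churn Zhe points out
    return encryptionZones == null
        ? java.util.Collections.<Long>emptyList()
        : encryptionZones.keySet();
  }
}
{code}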

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10458.00.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires the {{readLock}} and 
> checks whether the path belongs to an EZ. On a busy system with potentially 
> many listing operations, this could cause lock contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} 
> that returns whether the system has any EZ. If there is no EZ at all, 
> {{getFileEncryptionInfo}} should return null without taking the {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe it doesn't 
> itself need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, the path cannot be 
> encrypted.






[jira] [Commented] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311343#comment-15311343
 ] 

Hadoop QA commented on HDFS-10476:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
52s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 22s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807532/HDFS-10476-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10476 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ba1cc39766d5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 20d8cf7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15627/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15627/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15627/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15627/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: Plan command output directory should be a sub-directory
> 

[jira] [Commented] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311338#comment-15311338
 ] 

Hadoop QA commented on HDFS-10464:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
54s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 32s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 30s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807537/HDFS-10464.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-10464 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux bc14a61f62ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 4376526 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15628/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15628/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  

[jira] [Commented] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311275#comment-15311275
 ] 

Hadoop QA commented on HDFS-10468:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-hdfs-project: The patch generated 1 new + 125 
unchanged - 2 fixed = 126 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 51s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 101m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807518/HDFS-10468.002.patch |
| JIRA Issue | HDFS-10468 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1caa1310432e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0bc05e4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15626/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15626/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15626/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311248#comment-15311248
 ] 

Arpit Agarwal commented on HDFS-10476:
--

Hi [~anu], we should use Path.SEPARATOR instead of a hard-coded "/".

+1 pending Jenkins with that fixed, feel free to fix while committing.
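
For illustration, a minimal sketch of the suggested substitution (the helper 
class and names here are hypothetical, not the actual patch code):

{code}
import org.apache.hadoop.fs.Path;

class PlanPathExample {
  // Prefer the Path.SEPARATOR constant over a hard-coded "/" so the
  // separator is defined in exactly one place.
  static String planDir(String parent, String child) {
    return parent + Path.SEPARATOR + child;
  }
}
{code}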

> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
> Key: HDFS-10476
> URL: https://issues.apache.org/jira/browse/HDFS-10476
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10476-HDFS-1312.001.patch
>
>
> The plan command output is placed in a default directory of 
> /system/diskbalancer; instead it should be placed in 
> /system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Attachment: HDFS-10464.HDFS-8707.003.patch

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch, 
> HDFS-10464.HDFS-8707.001.patch, HDFS-10464.HDFS-8707.002.patch, 
> HDFS-10464.HDFS-8707.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Attachment: HDFS-10464.HDFS-8707.002.patch

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch, 
> HDFS-10464.HDFS-8707.001.patch, HDFS-10464.HDFS-8707.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Attachment: HDFS-10464.HDFS-8707.001.patch

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch, 
> HDFS-10464.HDFS-8707.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311220#comment-15311220
 ] 

Andrew Wang commented on HDFS-9924:
---

Also to be clear, I'm only talking about backing out the changes that are part 
of the user-facing API. We can leave the RPC engine changes since, like you 
said, they seem stable.

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
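
For illustration, a toy sketch of the calling pattern being proposed; the 
class and method names below are made up for this example, since the real 
async HDFS API is still under discussion:

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

class AsyncCallPattern {
  // Stand-in for an asynchronous RPC; here the result is already available.
  Future<Boolean> renameAsync(String src, String dst) {
    return CompletableFuture.completedFuture(true);
  }

  void caller() throws ExecutionException, InterruptedException {
    Future<Boolean> f = renameAsync("/a", "/b"); // returns immediately
    // ... issue more independent calls or do other work here ...
    boolean ok = f.get(); // block only when the result is actually needed
  }
}
{code}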



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-01 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10476:

Status: Patch Available  (was: Open)

> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
> Key: HDFS-10476
> URL: https://issues.apache.org/jira/browse/HDFS-10476
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10476-HDFS-1312.001.patch
>
>
> The plan command output is placed in a default directory of 
> /system/diskbalancer; instead it should be placed in 
> /system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-01 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10476:

Attachment: HDFS-10476-HDFS-1312.001.patch

> DiskBalancer: Plan command output directory should be a sub-directory
> -
>
> Key: HDFS-10476
> URL: https://issues.apache.org/jira/browse/HDFS-10476
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10476-HDFS-1312.001.patch
>
>
> The plan command output is placed in a default directory of 
> /system/diskbalancer; instead it should be placed in 
> /system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311205#comment-15311205
 ] 

Andrew Wang commented on HDFS-9924:
---

Based on what I've seen, people are actively trying to resolve 2.8 blockers and 
pushing things out to later releases. I'm trying to do the same for the first 
3.0 alpha. We're mainly blocked on HADOOP-12893, which (fingers crossed) is 
getting close.

I'm happy to do the git work if that's the main concern; I think it'll be 
fairly easy to move it out and back in later, since the new stuff is pretty 
separate.

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311199#comment-15311199
 ] 

Jitendra Nath Pandey commented on HDFS-9924:


What is the expected timeframe for the 2.8 release? Hopefully, we will settle 
on the API by then. The code in trunk or branch-2 need not move out at all, as 
the 3.0 and 2.9 releases are still far out. In its current shape, the code 
works and doesn't destabilize the branches, and most of the work is complete. 
Therefore, we don't need to hurry to move it out and add the overhead of 
merging it back. Instead, we should try to expedite convergence on the API. 
Based on the discussion in HADOOP-12910, it does seem like there is a lot of 
demand for Future with callback. We should plan to add that, ideally in a way 
that works on both 3.x and 2.x. Reposting [~szetszwo]'s comment here:
bq. It seems that people really want Future with callback. I will think about 
how to do it... 

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10476) DiskBalancer: Plan command output directory should be a sub-directory

2016-06-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10476:
---

 Summary: DiskBalancer: Plan command output directory should be a 
sub-directory
 Key: HDFS-10476
 URL: https://issues.apache.org/jira/browse/HDFS-10476
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


The plan command output is placed in a default directory of 
/system/diskbalancer; instead it should be placed in 
/system/diskbalancer/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311128#comment-15311128
 ] 

Andrew Wang commented on HDFS-9924:
---

I asked for this earlier but haven't seen any action yet: can we move all the 
patches involving user-facing APIs to a branch? We still haven't converged on 
the API, and I don't want this appearing in a release until that's settled.

I can do the git work if that's helpful; it looks like the new code is pretty 
separate.

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover

2016-06-01 Thread Nicolas Fraison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Fraison updated HDFS-10220:
---
Status: Open  (was: Patch Available)

> A large number of expired leases can make namenode unresponsive and cause 
> failover
> --
>
> Key: HDFS-10220
> URL: https://issues.apache.org/jira/browse/HDFS-10220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
>Priority: Minor
> Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, 
> HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, 
> HADOOP-10220.006.patch, HADOOP-10220.007.patch, threaddump_zkfc.txt
>
>
> I have faced a namenode failover due to an unresponsive namenode detected by 
> the zkfc, with lots of WARN messages (5 million) like this one:
> _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All 
> existing blocks are COMPLETE, lease removed, file closed._
> In the thread dump taken by the zkfc there are lots of threads blocked on a 
> lock.
> Looking at the code, there is a lock taken by the LeaseManager.Monitor when 
> some lease must be released. Due to the really big number of leases to be 
> released, the namenode took too long to release them, blocking all other 
> tasks and making the zkfc think that the namenode was not available/stuck.
> The idea of this patch is to limit the number of leases released each time we 
> check for leases, so the lock won't be held for too long a period.
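
For illustration, a toy sketch of the batching idea (this is not the actual 
LeaseManager code; the cap, names, and structure are assumptions):

{code}
import java.util.ArrayDeque;
import java.util.Queue;

class BoundedLeaseRelease {
  private static final int MAX_RELEASES_PER_CHECK = 1000; // hypothetical cap
  private final Queue<String> expiredLeases = new ArrayDeque<>();

  synchronized void checkLeases() {
    int released = 0;
    // Release at most MAX_RELEASES_PER_CHECK leases per invocation so the
    // lock is never held for an unbounded amount of time.
    while (!expiredLeases.isEmpty() && released < MAX_RELEASES_PER_CHECK) {
      expiredLeases.poll(); // stands in for internalReleaseLease(...)
      released++;
    }
    // Any remaining expired leases are picked up on the next monitor cycle,
    // after the lock has been released and other operations have run.
  }
}
{code}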



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover

2016-06-01 Thread Nicolas Fraison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Fraison updated HDFS-10220:
---
Status: Patch Available  (was: Open)

> A large number of expired leases can make namenode unresponsive and cause 
> failover
> --
>
> Key: HDFS-10220
> URL: https://issues.apache.org/jira/browse/HDFS-10220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
>Priority: Minor
> Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, 
> HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, 
> HADOOP-10220.006.patch, HADOOP-10220.007.patch, threaddump_zkfc.txt
>
>
> I have faced a namenode failover due to an unresponsive namenode detected by 
> the zkfc, with lots of WARN messages (5 million) like this one:
> _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All 
> existing blocks are COMPLETE, lease removed, file closed._
> In the thread dump taken by the zkfc there are lots of threads blocked on a 
> lock.
> Looking at the code, there is a lock taken by the LeaseManager.Monitor when 
> some lease must be released. Due to the really big number of leases to be 
> released, the namenode took too long to release them, blocking all other 
> tasks and making the zkfc think that the namenode was not available/stuck.
> The idea of this patch is to limit the number of leases released each time we 
> check for leases, so the lock won't be held for too long a period.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10468:
-
Attachment: HDFS-10468.002.patch

Thanks for the review, [~iwasakims]. I agree there are still many places in 
DFSInputStream that do not correctly handle the interrupt. But considering 
the current complexity of the DFSInputStream code, I do not plan to fix all of 
them in this jira; to achieve that we may also need to do more code 
refactoring. Maybe we can create an umbrella jira for this later.

I did another quick skim of the current DFSInputStream code and fixed several 
other places. Uploaded 002 patch.

> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, 
> HDFS-10468.002.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]
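
For illustration, a toy sketch of the desired behavior (not the actual 
DFSInputStream code): a retry loop that surfaces the interrupt instead of 
swallowing it:

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

class InterruptAwareRetry {
  byte[] readWithRetry() throws IOException {
    while (true) {
      if (Thread.currentThread().isInterrupted()) {
        // Honor an interrupt status set before or during the operation.
        throw new InterruptedIOException("read interrupted");
      }
      try {
        return tryReadOnce();
      } catch (IOException e) {
        try {
          Thread.sleep(1000); // back off before the next attempt
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt(); // restore interrupt status
          throw new InterruptedIOException(
              "interrupted while waiting to retry");
        }
      }
    }
  }

  private byte[] tryReadOnce() throws IOException {
    return new byte[0]; // placeholder for the real block read
  }
}
{code}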



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311033#comment-15311033
 ] 

Hadoop QA commented on HDFS-10458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 43s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807494/HDFSA-10458.02.patch |
| JIRA Issue | HDFS-10458 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77af916c487d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5870611 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15625/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15625/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10458.00.patch, HDFSA-10458.01.patch, 
> 

[jira] [Commented] (HDFS-10467) Router-based HDFS federation

2016-06-01 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310998#comment-15310998
 ] 

Inigo Goiri commented on HDFS-10467:


[~He Tianyi], our proposal requires additional components (State Store and 
Router), so it might be a little too complex for what you want.
Let me post a patch with our prototype during the week, and if it sounds 
reasonable to you, you can decide whether to merge efforts.

> Router-based HDFS federation
> 
>
> Key: HDFS-10467
> URL: https://issues.apache.org/jira/browse/HDFS-10467
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Inigo Goiri
> Attachments: HDFS Router Federation.pdf
>
>
> Add a Router to provide a federated view of multiple HDFS clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10465) libhdfs++: Implement GetBlockLocations

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10465:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Landed at 43765269e7119caacc89e18e4baf46cc921643ca

> libhdfs++: Implement GetBlockLocations
> --
>
> Key: HDFS-10465
> URL: https://issues.apache.org/jira/browse/HDFS-10465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10465.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
Attachment: HDFSA-10458.02.patch

Oops, missed a {{!}} in the if condition.

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10458.00.patch, HDFSA-10458.01.patch, 
> HDFSA-10458.02.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10475) Adding metrics and warn/debug logs for long FSD lock

2016-06-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10475:
-

 Summary: Adding metrics and warn/debug logs for long FSD lock
 Key: HDFS-10475
 URL: https://issues.apache.org/jira/browse/HDFS-10475
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This is a follow-up of the comment on HADOOP-12916 and 
[here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837] 
to add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
the namenode, similar to the slow write/network WARN/metrics we have on the 
datanode.
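
For illustration, a toy sketch of the metric/logging idea (the names and the 
threshold below are assumptions, not the eventual patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TimedLock {
  private static final Logger LOG = LoggerFactory.getLogger(TimedLock.class);
  private static final long WARN_THRESHOLD_MS = 1000; // assumed threshold

  synchronized void doLockedWork(Runnable work) {
    long start = System.currentTimeMillis();
    try {
      work.run();
    } finally {
      long heldMs = System.currentTimeMillis() - start;
      if (heldMs > WARN_THRESHOLD_MS) {
        // A metric counter could be bumped here as well.
        LOG.warn("Lock held for {} ms", heldMs);
      }
    }
  }
}
{code}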



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310837#comment-15310837
 ] 

Xiaoyu Yao edited comment on HDFS-9924 at 6/1/16 6:31 PM:
--

[~daryn], thanks for the valuable feedback. [~kihwal] also mentioned a similar 
issue 
[here|https://issues.apache.org/jira/browse/HADOOP-12916?focusedCommentId=15277342=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15277342].
 But I wasn't able to get clarification on it. The FSN/FSD locking issue is a 
very good point. I tried to find some metrics/logs about it but there weren't 
any. I will open a separate ticket to add more metrics and WARN/DEBUG logs for 
long locking operations on the namenode, similar to the slow write/network 
WARN/metrics we have on the datanode.

As you mentioned above, the priority level is assigned by the scheduler. As 
part of HADOOP-12916, we separated the scheduler from the call queue and made 
it pluggable so that priority assignment can be customized as appropriate for 
different workloads. For the mixed write-intensive and read workload example, 
I agree that the DecayedRpcScheduler, which uses call rate to determine 
priority, may not be a good choice. We have thought of adding a different 
scheduler that combines the weight of an RPC call and its rate, but it is 
tricky to assign weights. For example, getContentSummary on a directory with 
millions of files/dirs and on a directory with a few files/dirs won't have the 
same impact on the NN. 

Backoff based on response time allows all users to stop overloading the 
namenode when the high-priority RPC calls experience longer-than-normal 
end-to-end delay. User2/User3/User4 (low priority based on call rate) will 
have a much wider response-time threshold for backing off. In this case, User1 
will be backed off first by breaching the smaller response-time threshold, 
getting the namenode out of the state in which other users cannot use it 
"fairly". 

We are also proposing a scheduler that offers better namenode resource 
management via YARN integration in HADOOP-13128. I would appreciate it if you 
could share your thoughts and comments on the proposal there as well. Thanks!



was (Author: xyao):
[~daryn], thanks for the valuable feedback. @Kihwal Lee also mentioned a 
similar issue 
[here|https://issues.apache.org/jira/browse/HADOOP-12916?focusedCommentId=15277342=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15277342].
 But I wasn't able to get clarification on it. The FSN/FSD locking issue is a 
very good point. I tried to find some metrics/logs about it but there weren't 
any. I will open a separate ticket to add more metrics and WARN/DEBUG logs for 
long locking operations on the namenode, similar to the slow write/network 
WARN/metrics we have on the datanode.

As you mentioned above, the priority level is assigned by the scheduler. As 
part of HADOOP-12916, we separated the scheduler from the call queue and made 
it pluggable so that priority assignment can be customized as appropriate for 
different workloads. For the mixed write-intensive and read workload example, 
I agree that the DecayedRpcScheduler, which uses call rate to determine 
priority, may not be a good choice. We have thought of adding a different 
scheduler that combines the weight of an RPC call and its rate, but it is 
tricky to assign weights. For example, getContentSummary on a directory with 
millions of files/dirs and on a directory with a few files/dirs won't have the 
same impact on the NN. 

Backoff based on response time allows all users to stop overloading the 
namenode when the high-priority RPC calls experience longer-than-normal 
end-to-end delay. User2/User3/User4 (low priority based on call rate) will 
have a much wider response-time threshold for backing off. In this case, User1 
will be backed off first by breaching the smaller response-time threshold, 
getting the namenode out of the state in which other users cannot use it 
"fairly". 

We are also proposing a scheduler that offers better namenode resource 
management via YARN integration in HADOOP-13128. I would appreciate it if you 
could share your thoughts and comments on the proposal there as well. Thanks!


> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to 

[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310837#comment-15310837
 ] 

Xiaoyu Yao commented on HDFS-9924:
--

[~daryn], thanks for the valuable feedback. @Kihwal Lee also mentioned a 
similar issue 
[here|https://issues.apache.org/jira/browse/HADOOP-12916?focusedCommentId=15277342=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15277342].
 But I wasn't able to get clarification on it. The FSN/FSD locking issue is a 
very good point. I tried to find some metrics/logs about it but there weren't 
any. I will open a separate ticket to add more metrics and WARN/DEBUG logs for 
long locking operations on the namenode, similar to the slow write/network 
WARN/metrics we have on the datanode.

As you mentioned above, the priority level is assigned by the scheduler. As 
part of HADOOP-12916, we separated the scheduler from the call queue and made 
it pluggable so that priority assignment can be customized as appropriate for 
different workloads. For the mixed write-intensive and read workload example, 
I agree that the DecayedRpcScheduler, which uses call rate to determine 
priority, may not be a good choice. We have thought of adding a different 
scheduler that combines the weight of an RPC call and its rate, but it is 
tricky to assign weights. For example, getContentSummary on a directory with 
millions of files/dirs and on a directory with a few files/dirs won't have the 
same impact on the NN. 

Backoff based on response time allows all users to stop overloading the 
namenode when the high-priority RPC calls experience longer-than-normal 
end-to-end delay. User2/User3/User4 (low priority based on call rate) will 
have a much wider response-time threshold for backing off. In this case, User1 
will be backed off first by breaching the smaller response-time threshold, 
getting the namenode out of the state in which other users cannot use it 
"fairly". 

We are also proposing a scheduler that offers better namenode resource 
management via YARN integration in HADOOP-13128. I would appreciate it if you 
could share your thoughts and comments on the proposal there as well. Thanks!


> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10367) TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.

2016-06-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310778#comment-15310778
 ] 

Brahma Reddy Battula commented on HDFS-10367:
-

[~iwasakims] I will raise the random-port improvement tomorrow. Hope this 
patch can be committed.
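
Presumably the random-port improvement means letting tests bind to an 
ephemeral port instead of a fixed one; a toy sketch of that idea:

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

class EphemeralPortExample {
  // Binding to port 0 lets the OS pick a free port, avoiding the
  // "Address already in use" BindException seen in the test.
  static int pickFreePort() throws Exception {
    try (ServerSocket s = new ServerSocket()) {
      s.bind(new InetSocketAddress("localhost", 0));
      return s.getLocalPort();
    }
  }
}
{code}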

> TestDFSShell.testMoveWithTargetPortEmpty fails with Address bind exception.
> ---
>
> Key: HDFS-10367
> URL: https://issues.apache.org/jira/browse/HDFS-10367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10367-002.patch, HDFS-10367-003.patch, 
> HDFS-10367-004.patch, HDFS-10367-005.patch, HDFS-10367.patch
>
>
> {noformat}
> Problem binding to [localhost:9820] java.net.BindException: Address already 
> in use; For more details see:  http://wiki.apache.org/hadoop/BindException
> Stack Trace:
> java.net.BindException: Problem binding to [localhost:9820] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:444)
>   at sun.nio.ch.Net.bind(Net.java:436)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:793)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2592)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:563)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:538)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:426)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:783)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:924)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:903)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
>   at 
> org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:567)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310768#comment-15310768
 ] 

Hadoop QA commented on HDFS-10464:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
28s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 25s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 25s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 26s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 26s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 63 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 49s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807476/HDFS-10464.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-10464 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 0440e9565360 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / f0ef898 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15624/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15624/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| javac | 

[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310753#comment-15310753
 ] 

Hadoop QA commented on HDFS-10458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 38s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807467/HDFSA-10458.01.patch |
| JIRA Issue | HDFS-10458 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c204cb6dc26 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5870611 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15623/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15623/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15623/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Attachment: HDFS-10464.HDFS-8707.000.patch

Patch 000 contributed by Anatoli Shein.

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10464) libhdfs++: Implement GetPathInfo and ListDirectory

2016-06-01 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10464:
--
Status: Patch Available  (was: Open)

> libhdfs++: Implement GetPathInfo and ListDirectory
> --
>
> Key: HDFS-10464
> URL: https://issues.apache.org/jira/browse/HDFS-10464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10464.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster

2016-06-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10458:
-
Attachment: HDFSA-10458.01.patch

Thanks [~shv] for the review. Attaching a new patch that takes the suggestion. I 
think it is a good idea to add the check in {{getEncryptionZoneForPath}}.

However, for uses in {{startFile}} etc., we should be more careful about race 
conditions. Therefore I added a boolean variable that can only be turned on 
once (it can never be turned off). It is turned on *before* any encryption 
zone is added, so it is safe to assume no EZ exists while the variable is false.
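
For illustration, a minimal sketch of that one-way flag (names are hypothetical, 
not the exact patch):

{code}
// Hypothetical sketch of the one-way flag: it is flipped on *before* the
// first encryption zone is registered and is never turned off again.
private volatile boolean hasCreatedEncryptionZone = false;

void addEncryptionZone(Long inodeId, EncryptionZoneInt ez) {
  hasCreatedEncryptionZone = true;   // set before the zone becomes visible
  encryptionZones.put(inodeId, ez);
}

boolean hasCreatedEncryptionZone() {
  // Monotonic: once true, always true. A false read therefore proves no
  // EZ existed at read time, so callers may skip the readLock safely.
  return hasCreatedEncryptionZone;
}
{code}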

> getFileEncryptionInfo should return quickly for non-encrypted cluster
> -
>
> Key: HDFS-10458
> URL: https://issues.apache.org/jira/browse/HDFS-10458
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 2.6.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10458.00.patch, HDFSA-10458.01.patch
>
>
> {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks 
> if the path belongs to an EZ. For a busy system with potentially many listing 
> operations, this could cause locking contention.
> I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to 
> return whether the system has any EZ. If no EZ at all, 
> {{getFileEncryptionInfo}} should return null without {{readLock}}.
> If {{hasEncryptionZone}} is only used in the above scenario, maybe itself 
> doesn't need a {{readLock}} -- if the system doesn't have any EZ when 
> {{getFileEncryptionInfo}} is called on a path, it means the path cannot be 
> encrypted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HDFS-9353:


[~templedf]

Thanks for taking a look at it.
First of all, a note:
I checked it again. This comment is no longer in the JavaKeyStoreProvider class; 
the related method was moved into ProviderUtils by HADOOP-13157.

Back to the base problem:
I checked the code and it seems pretty straightforward. If you think the comment 
is misleading I would remove it; I do not feel we should explain this with 
in-line comments. Alternatively, we could add more explanation in the javadoc 
instead of inline.

What do you think?

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from ENV first
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-06-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310326#comment-15310326
 ] 

Daniel Templeton commented on HDFS-9353:


As a native English speaker, I respectfully disagree.  The comment is not just 
open for misinterpretation; it's misleading.  It took me several reads to 
figure out how it could possibly mean what it's supposed to mean, i.e. env 
first.  Comments are supposed to improve the readability of the code, not 
contradict it.  This comment should be corrected or deleted.
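
For readers following along, a sketch of the env-first order being described 
(constant names are illustrative; after HADOOP-13157 the real lookup lives in 
ProviderUtils):

{code}
// Illustrative sketch of the lookup order under discussion: environment
// variable first, configuration key only as the fallback.
String pwFile = System.getenv(KEYSTORE_PASSWORD_ENV_VAR);
if (pwFile == null) {
  pwFile = conf.get(KEYSTORE_PASSWORD_FILE_KEY);
}
{code}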

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from ENV first
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-9353) Code and comment mismatch in JavaKeyStoreProvider

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-9353.

Resolution: Not A Problem

I checked with my team. We agreed that the comment is ok and no change is 
needed.

> Code and comment mismatch in  JavaKeyStoreProvider 
> ---
>
> Key: HDFS-9353
> URL: https://issues.apache.org/jira/browse/HDFS-9353
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: nijel
>Assignee: Andras Bokor
>Priority: Trivial
>
> In
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.JavaKeyStoreProvider(URI 
> uri, Configuration conf) throws IOException
> The comment mentioned is
> {code}
> // Get the password file from the conf, if not present from the user's
> // environment var
> {code}
> But the code takes the value from ENV first
> I think this makes sense since the user can pass the ENV for a particular run.
> My suggestion is to change the comment



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310320#comment-15310320
 ] 

Hadoop QA commented on HDFS-10474:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 39s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 7 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 25s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 23s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| 

[jira] [Resolved] (HDFS-5059) Unnecessary permission denied error when creating/deleting snapshots with a non-existent directory

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HDFS-5059.

Resolution: Fixed

> Unnecessary permission denied error when creating/deleting snapshots with a 
> non-existent directory
> --
>
> Key: HDFS-5059
> URL: https://issues.apache.org/jira/browse/HDFS-5059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie
>
> As a non-superuser, when you create or delete a snapshot but accidentally 
> specify a non-existent directory to snapshot, you will see an 
> extra/unnecessary permission denied error right after the "No such file or 
> directory" error.
> {code}
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Permission denied
> [schu@hdfs-snapshots-vanilla ~]$ hdfs dfs -createSnapshot /user/schuf/ snap1
> createSnapshot: `/user/schuf/': No such file or directory
> createSnapshot: Permission denied
> {code}
> As the HDFS superuser, instead of the "Permission denied" error you'll get an 
> extra "Directory does not exist" error.
> {code}
> [root@hdfs-snapshots-vanilla ~]# hdfs dfs -deleteSnapshot /user/schuf/ snap1
> deleteSnapshot: `/user/schuf/': No such file or directory
> deleteSnapshot: Directory does not exist: /user/schuf
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-06-01 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HDFS-10425:
---

Assignee: Andras Bokor

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10465) libhdfs++: Implement GetBlockLocations

2016-06-01 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310270#comment-15310270
 ] 

James Clampffer commented on HDFS-10465:


Everything looks good to me.  The test infrastructure helper functions should 
be really useful elsewhere as well. +1


> libhdfs++: Implement GetBlockLocations
> --
>
> Key: HDFS-10465
> URL: https://issues.apache.org/jira/browse/HDFS-10465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10465.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5012) replica.getGenerationStamp() may be >= recoveryId

2016-06-01 Thread Christian Bartolomaeus (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310206#comment-15310206
 ] 

Christian Bartolomaeus commented on HDFS-5012:
--

Some additional information about the above stack trace: The error happened after 
a machine running a DataNode crashed and rebooted unexpectedly. The warning 
message was logged by that machine after the reboot when the DataNode process 
started. 

On three other DataNodes (those holding replicas of the block in question) the 
following error was logged:

{noformat}
PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
THIS IS NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 
1527175689, block=blk_2570851709037266390_1527175689, replica=FinalizedReplica, 
blk_2570851709037266390_1527175689, FINALIZED
  getNumBytes() = 48360562
  getBytesOnDisk()  = 48360562
  getVisibleLength()= 48360562
  getVolume()   = /var/lib/hdfs5/data/current
  getBlockFile()= 
/var/lib/hdfs5/data/current/BP-655596758-10.10.34.1-1341996058045/current/finalized/subdir38/subdir48/blk_2570851709037266390
  unlinked  =false
{noformat}

> replica.getGenerationStamp() may be >= recoveryId
> -
>
> Key: HDFS-5012
> URL: https://issues.apache.org/jira/browse/HDFS-5012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Ted Yu
> Attachments: testReplicationQueueFailover.txt
>
>
> The following was first observed by [~jdcryans] in 
> TestReplicationQueueFailover running against 2.0.5-alpha:
> {code}
> 2013-07-16 17:14:33,340 ERROR [IPC Server handler 7 on 35081] 
> security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user 
> (auth:SIMPLE) cause:java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> 2013-07-16 17:14:33,341 WARN  
> [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] 
> datanode.DataNode(1894): Failed to obtain replica info for block 
> (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from 
> datanode (=127.0.0.1:47006)
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310166#comment-15310166
 ] 

Hadoop QA commented on HDFS-10341:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 13s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 54s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807388/HDFS-10341.04.patch |
| JIRA Issue | HDFS-10341 |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux e2e78ac6526b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d749cf6 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15621/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15621/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  

[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Attachment: HDFS-10474-branch-2-002.patch

Changed the encoding to UTF-8 and uploaded the patch.
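
As a quick illustration of why UTF-8 matters here (a standalone sketch, not the 
patch itself), the multi-argument {{java.net.URI}} constructor percent-encodes 
the path as UTF-8, so multi-byte characters survive the round trip:

{code}
import java.net.URI;

// Illustrative sketch (not the exact patch): the path is percent-encoded
// as UTF-8, so characters such as 节 are preserved end to end.
URI u = new URI("hftp", null, "host", 25000, "/tmp/节节高@2X.png", null, null);
System.out.println(u.toASCIIString());
// -> hftp://host:25000/tmp/%E8%8A%82%E8%8A%82%E9%AB%98@2X.png
{code}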

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch, 
> HDFS-10474-branch-2-002.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-01 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15310025#comment-15310025
 ] 

Akira AJISAKA commented on HDFS-10341:
--

Thanks [~arpiagariu] and [~xiaobingo] for the comments. Updated the patch to 
address the comments.

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the number of timed-out pending replication blocks is 
> useful for gauging cluster health.
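
For reference, exposing such a counter through Hadoop's metrics2 annotations 
could look roughly like this (field name and wiring are assumptions, not the 
attached patch):

{code}
// Hypothetical sketch using the metrics2 library: a counter bumped
// whenever pending replication blocks time out.
@Metric("Number of timed-out pending replication blocks")
MutableCounterLong timeoutReplications;

void onPendingReplicationTimeout(int count) {
  timeoutReplications.incr(count);  // incremented per timed-out block
}
{code}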



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10341) Add a metric to expose the timeout number of pending replication blocks

2016-06-01 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-10341:
-
Attachment: HDFS-10341.04.patch

> Add a metric to expose the timeout number of pending replication blocks
> ---
>
> Key: HDFS-10341
> URL: https://issues.apache.org/jira/browse/HDFS-10341
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-10341.01.patch, HDFS-10341.02.patch, 
> HDFS-10341.03.patch, HDFS-10341.04.patch
>
>
> Per HDFS-6682, recording the number of timed-out pending replication blocks is 
> useful for gauging cluster health.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309957#comment-15309957
 ] 

Hadoop QA commented on HDFS-10474:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 10m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 27s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 36s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2 has 7 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 12s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 12s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 53s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 53s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 38s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 45s {color} 
| {color:red} hadoop-common in the 

[jira] [Commented] (HDFS-5012) replica.getGenerationStamp() may be >= recoveryId

2016-06-01 Thread Michael Tamm (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309860#comment-15309860
 ] 

Michael Tamm commented on HDFS-5012:


We had the same problem in our Hadoop cluster (not a test cluster, but a live 
cluster with real data, HDFS version: 2.0.0-cdh4.2.0):
{noformat}
Failed to obtain replica info for block 
(=BP-655596758-10.10.34.1-1341996058045:blk_2570851709037266390_1527175689) 
from datanode (=10.10.34.35:50010)


java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
replica.getGenerationStamp() >= recoveryId = 1527175689, 
block=blk_2570851709037266390_1527175689, replica=FinalizedReplica, 
blk_2570851709037266390_1527175689, FINALIZED
  getNumBytes() = 48360562
  getBytesOnDisk()  = 48360562
  getVisibleLength()= 48360562
  getVolume()   = /var/lib/hdfs2/data/current
  getBlockFile()= 
/var/lib/hdfs2/data/current/BP-655596758-10.10.34.1-1341996058045/current/finalized/subdir9/blk_2570851709037266390
  unlinked  =false
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1451)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1411)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:1920)
at 
org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
at 
org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:2198)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:1933)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2000)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:214)
at org.apache.hadoop.hdfs.server.datanode.DataNode$2.run(DataNode.java:1905)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): THIS IS 
NOT SUPPOSED TO HAPPEN: replica.getGenerationStamp() >= recoveryId = 
1527175689, block=blk_2570851709037266390_1527175689, replica=FinalizedReplica, 
blk_2570851709037266390_1527175689, FINALIZED
  getNumBytes() = 48360562
  getBytesOnDisk()  = 48360562
  getVisibleLength()= 48360562
  getVolume()   = /var/lib/hdfs2/data/current
  getBlockFile()= 
/var/lib/hdfs2/data/current/BP-655596758-10.10.34.1-1341996058045/current/finalized/subdir9/blk_2570851709037266390
  unlinked  =false
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1451)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:1411)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:1920)
at 
org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55)
at 
org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:2198)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 

[jira] [Updated] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover

2016-06-01 Thread Nicolas Fraison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Fraison updated HDFS-10220:
---
Attachment: HADOOP-10220.007.patch

> A large number of expired leases can make namenode unresponsive and cause 
> failover
> --
>
> Key: HDFS-10220
> URL: https://issues.apache.org/jira/browse/HDFS-10220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
>Priority: Minor
> Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, 
> HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, 
> HADOOP-10220.006.patch, HADOOP-10220.007.patch, threaddump_zkfc.txt
>
>
> I have faced a namenode failover due to an unresponsive namenode detected by the 
> zkfc, with lots of WARN messages (5 million) like this one:
> _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All 
> existing blocks are COMPLETE, lease removed, file closed._
> In the threaddump taken by the zkfc there are lots of threads blocked on a 
> lock.
> Looking at the code, a lock is taken by the LeaseManager.Monitor when 
> leases must be released. Due to the really big number of leases to be 
> released, the namenode took too long to release them, blocking all 
> other tasks and making the zkfc think that the namenode was not 
> available/stuck.
> The idea of this patch is to limit the number of leases released each time we 
> check for leases, so the lock won't be held for too long a period.
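
A minimal sketch of that batching idea (the limit and helper names are 
hypothetical, not the attached patch):

{code}
// Hypothetical sketch: release at most a fixed number of expired leases
// per monitor pass so the namesystem lock is held for bounded work.
static final int MAX_LEASES_PER_CHECK = 1000;  // illustrative limit

void checkLeases() {
  int released = 0;
  fsnamesystem.writeLock();
  try {
    while (hasExpiredLeases() && released < MAX_LEASES_PER_CHECK) {
      releaseOldestExpiredLease();   // close files and drop the lease
      released++;
    }
  } finally {
    fsnamesystem.writeUnlock();      // leftovers wait for the next pass
  }
}
{code}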



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309485#comment-15309485
 ] 

Brahma Reddy Battula commented on HDFS-10474:
-

Uploaded the patch. Kindly review.

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Status: Patch Available  (was: Open)

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Attachment: (was: HDFS-10474-001.patch)

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Attachment: HDFS-10474-branch-2-001.patch

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Attachment: HDFS-10474-001.patch

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-001.patch
>
>
> I have seen this while using distcp
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-01 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10474:
---

 Summary: hftp copy fails when file name with Chinese+special char 
in branch-2
 Key: HDFS-10474
 URL: https://issues.apache.org/jira/browse/HDFS-10474
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


I have seen this while using distcp

{noformat}
16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
attempt_1463564396851_0010_m_03_0, Status : FAILED
Error: java.io.IOException: File copy failed: hftp://*:25000/tmp/???@2X.png 
--> hdfs://hacluster/cxf7/tmp/节节高@2X.png
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
... 10 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309382#comment-15309382
 ] 

Masatake Iwasaki edited comment on HDFS-10468 at 6/1/16 6:43 AM:
-

Thanks for working on this, [~jingzhao].

The test worked for me but the expected exception still seems to be thrown 
after some retries in {{DFSInputStream#readWithStrategy}}.

In addition, there is another code path which swallows the interrupted 
exception. For example, {{DFSInputStream#chooseDataNode}} catches 
InterruptedException on the {{sleep}} before retries.

{code}
  DFSClient.LOG.warn("DFS chooseDataNode: got # " + (failures + 1) + " 
IOException, will wait for " + waitTime + " msec.");
  Thread.sleep((long)waitTime);
} catch (InterruptedException ignored) {
}
{code}

We do not have a way out here since {{java.lang.Thread#sleep}} clears the 
interrupted status before throwing InterruptedException.
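
The usual remedy (a sketch of the general Java pattern, not what the current 
code does) is to restore the flag in the catch block and abort the retry:

{code}
// General pattern (sketch): re-set the interrupt flag that sleep()
// cleared, then abort instead of swallowing the event.
try {
  Thread.sleep(waitTime);
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();  // restore the cleared flag
  throw new InterruptedIOException("Interrupted while waiting to retry");
}
{code}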



was (Author: iwasakims):
Thanks for working on this, [~jingzhao].

The test worked for me but the expected exception still seems to be thrown 
after some retries in {{DFSInputStream#readWithStrategy}}.

In addition, there is another code path which swallows interrupted exception. 
For example, {{DFSInputStream#readWithStrategy}} catches InterruptedException 
on the {{sleep}} before retries.

{code}
  DFSClient.LOG.warn("DFS chooseDataNode: got # " + (failures + 1) + " 
IOException, will wait for " + waitTime + " msec.");
  Thread.sleep((long)waitTime);
} catch (InterruptedException ignored) {
}
{code}

We do not have way out here since {{java.lang.Thread#sleep}} clears interrupted 
status before throwing Interrupted Exception.


> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309398#comment-15309398
 ] 

Masatake Iwasaki commented on HDFS-10468:
-

{code}
678   DFSClient.LOG.warn("The reading thread has been interrupted: {}.", ex);
{code}

nit: "{}" was not replaced since the second argument is Exception.

{noformat}
2016-06-01 14:10:29,476 [Thread-95] WARN  hdfs.DFSClient (DFSInputStream.java:blockSeekTo(678)) - The reading thread has been interrupted: {}.
java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
...
{noformat}
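
A one-line sketch of the fix, assuming {{DFSClient.LOG}} is an SLF4J-style logger: when the last argument is a Throwable it is logged as a stack trace rather than substituted into a placeholder, so the literal {} can simply be dropped:

{code}
// Sketch: the trailing Throwable is rendered as a stack trace by the
// logger, so no "{}" placeholder is needed in the message.
DFSClient.LOG.warn("The reading thread has been interrupted.", ex);
{code}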


> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309392#comment-15309392
 ] 

Masatake Iwasaki commented on HDFS-10468:
-

bq. there still seems to be code paths which swallows interrupted state.

I ran a modified {{TestRead#testInterruptReader}} against a mini cluster with short-circuit local read enabled. The {{read}} with the interrupt status set succeeded after a retry when {{ShortCircuitCache.fetchOrCreate}} returned a cached instance.
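
For illustration, a hypothetical repro sketch of that scenario ({{fs}}, {{path}}, and {{buf}} are assumed to be set up by the test; the expectation is that the read fails rather than succeeding on a retry):

{code}
// Hypothetical sketch: the interrupt status is set before the read starts.
Thread.currentThread().interrupt();
try (FSDataInputStream in = fs.open(path)) {
  // Expected: InterruptedIOException (or similar), not a successful read.
  in.read(buf, 0, buf.length);
}
{code}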


> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10468) HDFS read ends up ignoring an interrupt

2016-06-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309382#comment-15309382
 ] 

Masatake Iwasaki commented on HDFS-10468:
-

Thanks for working on this, [~jingzhao].

The test worked for me, but the expected exception still seems to be thrown only after some retries in {{DFSInputStream#readWithStrategy}}.

In addition, there is another code path which swallows the interrupt. For example, {{DFSInputStream#readWithStrategy}} catches InterruptedException during the {{sleep}} before retries.

{code}
  DFSClient.LOG.warn("DFS chooseDataNode: got # " + (failures + 1) + " 
IOException, will wait for " + waitTime + " msec.");
  Thread.sleep((long)waitTime);
} catch (InterruptedException ignored) {
}
{code}

We do not have a way out here, since {{java.lang.Thread#sleep}} clears the interrupted status before throwing InterruptedException.


> HDFS read ends up ignoring an interrupt
> ---
>
> Key: HDFS-10468
> URL: https://issues.apache.org/jira/browse/HDFS-10468
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Siddharth Seth
>Assignee: Jing Zhao
> Attachments: HDFS-10468.000.patch, HDFS-10468.001.patch, log
>
>
> If an interrupt comes in during an HDFS read - it looks like HDFS ends up 
> ignoring it (handling it), and retries the read after an interval.
> An interrupt should result in the read being cancelled, with an 
> InterruptedException being thrown.
> Similarly - if an HDFS op is started with the interrupt status on the thread 
> set, an InterruptedException should be thrown.
> cc [~jingzhao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-01 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-10473:
--

 Summary: Allow only suitable storage policies to be set on striped 
files
 Key: HDFS-10473
 URL: https://issues.apache.org/jira/browse/HDFS-10473
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


The existing storage policies are not suitable for striped-layout files. This JIRA proposes to reject setting a storage policy on striped files.

Another thought is to allow only suitable storage policies such as ALL_SSD. Since the major use case of EC is cold data, this may not be of high importance, so I am OK with rejecting storage-policy changes on striped files at this stage. Please suggest if others have thoughts on this; a rough sketch of the proposed check is below.
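
A hypothetical sketch of the rejection (the names and the exact location of the check are illustrative, e.g. somewhere in the NameNode's set-storage-policy path):

{code}
// Hypothetical sketch of the proposed check; iip is the resolved
// INodesInPath for the target and src is its path string.
INode inode = iip.getLastINode();
if (inode != null && inode.isFile() && inode.asFile().isStriped()) {
  throw new IOException(
      "Cannot set a storage policy on striped (erasure-coded) file " + src);
}
{code}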

Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org