[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-14 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513373#comment-16513373
 ] 

Chao Sun commented on HDFS-12976:
-

Adopted [~shv]'s suggestion and attached patch v3. The 
{{ConfiguredFailoverProxyProvider}} is mostly the same as before, but it does an 
extra check for the observer when doing {{performFailover}}. 
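
For illustration, the observer check during failover can be modeled with a 
minimal, self-contained sketch (plain Java; the class, enum, and field names 
below are illustrative assumptions, not the actual 
{{ConfiguredFailoverProxyProvider}} code): failover advances to the next 
configured NameNode but skips nodes known to be observers, so write traffic 
only rotates among active/standby nodes.

```java
import java.util.List;

// Minimal illustrative model (not the real Hadoop classes): a failover
// proxy provider that skips nodes in OBSERVER state, so failover only
// rotates among ACTIVE/STANDBY NameNodes.
class ObserverSkippingFailover {
    enum HAState { ACTIVE, STANDBY, OBSERVER }

    private final List<HAState> nodes; // state of each configured NameNode
    private int currentIndex = 0;

    ObserverSkippingFailover(List<HAState> nodes) {
        this.nodes = nodes;
    }

    int currentIndex() { return currentIndex; }

    // Advance to the next node, skipping observers (the "extra check").
    void performFailover() {
        for (int i = 0; i < nodes.size(); i++) {
            currentIndex = (currentIndex + 1) % nodes.size();
            if (nodes.get(currentIndex) != HAState.OBSERVER) {
                return;
            }
        }
        // All nodes are observers; keep the wrapped-around index.
    }
}
```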

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-14 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-12976:

Attachment: HDFS-12976-HDFS-12943.003.patch

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).






[jira] [Commented] (HDFS-13621) Upgrade commons-lang version to 3.7 in hadoop-hdfs-project

2018-06-14 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513348#comment-16513348
 ] 

Takanobu Asanuma commented on HDFS-13621:
-

The failed tests succeeded locally and do not seem to be related. The javac 
warnings are filed in HADOOP-15531. Kindly help to review it.

> Upgrade commons-lang version to 3.7 in hadoop-hdfs-project
> --
>
> Key: HDFS-13621
> URL: https://issues.apache.org/jira/browse/HDFS-13621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13621.1.patch
>
>
> commons-lang 2.6 is widely used. Let's upgrade to 3.7.
> This jira is separated from HADOOP-10783.






[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-14 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513349#comment-16513349
 ] 

Chao Sun commented on HDFS-12976:
-

Yes, that's the issue. However, if we keep the logic in 
{{ConfiguredFailoverProxyProvider}} the same, read requests may go to an 
observer, which may not be the desired behavior.

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement the {{FailoverProxyProvider}} 
> interface and be able to submit read requests to the ANN and SBN(s).






[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Attachment: HDFS-13682.01.patch

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even though the HDFS client RPC has valid 
> Kerberos credentials.






[jira] [Comment Edited] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513312#comment-16513312
 ] 

Xiao Chen edited comment on HDFS-13682 at 6/15/18 4:58 AM:
---

Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effect here, specifically in [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets KMSCP's morph-based-on-ugi logic, the UGI used as the actual 
UGI changed from loginUgi to currentUgi. (There was also a weird HTTP 400, 
which is fixed if contentType is set.)

Following this, I confirmed that if we change {{KMSCP#getActualUgi}}'s check 
from {{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and make {{UGI#isFromTicket}} public, of course), 
the test passes. This appears to be a more 'compatible' change. Patch 1 does 
this.
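
For clarity, the two predicates can be restated side by side. This is a 
plain-Java sketch where booleans stand in for the {{UGI}} query methods; the 
real check lives in {{KMSCP#getActualUgi}} and operates on a 
{{UserGroupInformation}} instance.

```java
// Illustrative restatement of the check change (booleans stand in for the
// UserGroupInformation query methods; not the real KMSClientProvider code).
class UgiCheck {
    // Old check: accept the UGI if it holds any Kerberos credentials.
    static boolean oldCheck(boolean hasKerberosCredentials) {
        return hasKerberosCredentials;
    }

    // Proposed check: accept the UGI unless its credentials came from a
    // keytab or from an externally obtained ticket.
    static boolean newCheck(boolean isFromKeytab, boolean isFromTicket) {
        return !isFromKeytab && !isFromTicket;
    }
}
```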

IMO we should still consider explicitly making the KMS call as the NN login 
UGI. This applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile: these calls are internal to the 
NN, and the HDFS RPC caller isn't expected to interact with the KMS directly in 
this case. This can be done in a separate Jira if it sounds good to the 
audience.
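
The suggestion to make the KMS call explicitly as the NN login UGI follows the 
usual doAs pattern ({{UserGroupInformation#doAs}} in Hadoop). Below is a 
minimal self-contained sketch of that pattern, where a plain string stands in 
for the UGI and a {{Supplier}} for the privileged action; all names here are 
illustrative, not the real Hadoop API.

```java
import java.util.function.Supplier;

// Self-contained model of the "run the call as a chosen identity" pattern.
// In Hadoop this would be loginUgi.doAs(...), so KMS calls made from inside
// the NN use the NN's own credentials rather than the RPC caller's.
class DoAsSketch {
    // Current identity for this thread, standing in for the current UGI.
    static final ThreadLocal<String> CURRENT_USER =
        ThreadLocal.withInitial(() -> "rpc-caller");

    // Install the given identity, run the action, then restore the old one.
    static <T> T doAs(String identity, Supplier<T> action) {
        String previous = CURRENT_USER.get();
        CURRENT_USER.set(identity);
        try {
            return action.get();
        } finally {
            CURRENT_USER.set(previous);
        }
    }
}
```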


was (Author: xiaochen):
Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effects here - specifically at [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets the KMSCP's morph-based-on-ugi logic, the ugi being used as 
actual changed from loginUgi to currentUgi. (Also has a weird HTTP 400 somehow, 
which is fixed if contentType is set).

Following this, I confirmed if we change {{KMSCP#getActualUgi}}'s check from 
{{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and making {{UGI#isFromTicket}} public of course), 
the test passes. This appears to be a more 'compatible' change.

IMO we should still consider explicitly doing the KMS call using the NN login 
ugi, this applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile. Reason being these calls are 
internal to the NN, and the hdfs rpc caller isn't expected to really interact 
with the KMS in this case. Can do this in a separate Jira if it sounds good to 
the audience.

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even though the HDFS client RPC has valid 
> Kerberos credentials.






[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Status: Patch Available  (was: Open)

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.01.patch, 
> HDFS-13682.dirty.repro.branch-2.patch, HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even though the HDFS client RPC has valid 
> Kerberos credentials.






[jira] [Commented] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1651#comment-1651
 ] 

genericqa commented on HDDS-160:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 5s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
55s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
18s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-160 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927898/HDDS-160-HDDS-48.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c79cd6a7357 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 998e285 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/316/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/316/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 206 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/316/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor KeyManager, ChunkManager
> -
>
> 

[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513324#comment-16513324
 ] 

genericqa commented on HDFS-13310:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
17s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-hdfs-project: The patch generated 89 new 
+ 843 unchanged - 1 fixed = 932 total (was 844) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}210m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  
org.apache.hadoop.hdfs.server.protocol.SyncTaskExecutionResult.getResult() may 
expose internal representation by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:by returning SyncTaskExecutionResult.result  At 
SyncTaskExecutionResult.java:[line 36] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
|   | hadoop.hdfs.server.namenode.TestNestedEncryptionZones |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   

[jira] [Commented] (HDFS-13186) [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513321#comment-16513321
 ] 

genericqa commented on HDFS-13186:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 34m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
45s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  8m 
42s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 38m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 38m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 38m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-common-project_hadoop-common generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
34s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
46s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}264m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513314#comment-16513314
 ] 

genericqa commented on HDFS-13609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
 2s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
30s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
21s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13609 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927913/HDFS-13609-HDFS-12943.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 60ad511fa3dd 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 292ccdc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24448/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24448/testReport/ |
| Max. process+thread count | 3591 (vs. ulimit of 1) |
| modules | C: 

[jira] [Comment Edited] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513312#comment-16513312
 ] 

Xiao Chen edited comment on HDFS-13682 at 6/15/18 4:08 AM:
---

Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effect here - specifically in [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724]. 
When this meets KMSCP's morph-based-on-ugi logic, the ugi used as the actual 
user changed from loginUgi to currentUgi. (There is also an odd HTTP 400, 
which is fixed once contentType is set.)

Following this, I confirmed that if we change {{KMSCP#getActualUgi}}'s check 
from {{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (making {{UGI#isFromTicket}} public, of course), 
the test passes. This appears to be a more 'compatible' change.

IMO we should still consider explicitly making the KMS call with the NN login 
ugi; this applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile. The reason is that these calls 
are internal to the NN, and the hdfs rpc caller isn't expected to interact 
with the KMS directly in this case. Can do this in a separate Jira if it 
sounds good to the audience.
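The proposed predicate change can be sketched as below. The {{Ugi}} class here is a stand-in, not Hadoop's {{UserGroupInformation}}; only the method names ({{isFromKeytab}}, {{isFromTicket}}) mirror the ones discussed, and everything else is an illustrative assumption, not the actual patch.

```java
// Minimal sketch of the proposed KMSCP#getActualUgi predicate change.
// Ugi is a stand-in class, NOT Hadoop's UserGroupInformation; only the
// method names mirror the real ones discussed above.
public class GetActualUgiSketch {

    static final class Ugi {
        private final boolean fromKeytab;
        private final boolean fromTicket;

        Ugi(boolean fromKeytab, boolean fromTicket) {
            this.fromKeytab = fromKeytab;
            this.fromTicket = fromTicket;
        }

        boolean isFromKeytab() { return fromKeytab; }
        boolean isFromTicket() { return fromTicket; }
    }

    // New check: only fall back to the login UGI when the current UGI's
    // credentials come from neither a keytab nor a ticket cache.
    static boolean shouldFallBackToLoginUgi(Ugi actualUgi) {
        return !actualUgi.isFromKeytab() && !actualUgi.isFromTicket();
    }

    public static void main(String[] args) {
        // Proxy-style UGI with no keytab/ticket of its own: fall back.
        System.out.println(shouldFallBackToLoginUgi(new Ugi(false, false)));
        // Keytab-backed UGI: keep using it.
        System.out.println(shouldFallBackToLoginUgi(new Ugi(true, false)));
    }
}
```

The point of the sketch: the old {{hasKerberosCredentials()}} check is true for UGIs whose tickets can expire without renewal, while the new check only trusts credentials backed by a keytab or ticket cache.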


was (Author: xiaochen):
Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effects here - specifically at [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets the KMSCP's morph-based-on-ugi logic, the ugi being used as 
actual changed from loginUgi to currentUgi. (Also has a weird HTTP 400 somehow, 
which is fixed if contentType is set).

Following this, I confirmed if we change {{KMSCP#getActualUgi}}'s check from 
{{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and making {{UGI#isFromTicket}} public of course), 
the test passes. This appears to be a more 'compatible' change.

IMO we should still consider explicitly doing the KMS call using the NN login 
ugi, this applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile. Reason being these calls are 
internal to the NN, and the hdfs rpc caller isn't expected to really interact 
with the KMS in this case.

 

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.dirty.repro.branch-2.patch, 
> HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token 
> (which is cached by KMSCP) expires, even though the HDFS client RPC has 
> valid Kerberos credentials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513312#comment-16513312
 ] 

Xiao Chen commented on HDFS-13682:
--

Took an easier route and debugged branch-2. It turns out HADOOP-9747 does have 
some effects here - specifically at [this 
method|https://github.com/apache/hadoop/commit/59cf7588779145ad5850ad63426743dfe03d8347#diff-e6a2371b73365b7ba7ff9a266b9aa138L724].
 When this meets the KMSCP's morph-based-on-ugi logic, the ugi being used as 
actual changed from loginUgi to currentUgi. (Also has a weird HTTP 400 somehow, 
which is fixed if contentType is set).

Following this, I confirmed if we change {{KMSCP#getActualUgi}}'s check from 
{{actualUgi.hasKerberosCredentials()}} to {{!actualUgi.isFromKeytab() && 
!actualUgi.isFromTicket()}} (and making {{UGI#isFromTicket}} public of course), 
the test passes. This appears to be a more 'compatible' change.

IMO we should still consider explicitly doing the KMS call using the NN login 
ugi, this applies to both the {{getMetadata}} call during createEZ and the 
{{generateEncryptedKey}} call from startFile. Reason being these calls are 
internal to the NN, and the hdfs rpc caller isn't expected to really interact 
with the KMS in this case.

 

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.dirty.repro.branch-2.patch, 
> HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token 
> (which is cached by KMSCP) expires, even though the HDFS client RPC has 
> valid Kerberos credentials.






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-160:
---
Fix Version/s: 0.2.1

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new ChunkManager and KeyManager interfaces to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Remove the usage of ContainerManager.
>  ## Pass the container to method calls.
>  ## Use layOutversion when reading/deleting chunk files.
>  
>  
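The refactor direction in the description can be sketched roughly as below: managers receive the container per call instead of resolving it through a {{ContainerManager}}. These are NOT the real HDDS interfaces; every name and signature here is an illustrative assumption.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: manager methods take the Container explicitly
// instead of looking it up through a ContainerManager. These are NOT the
// real HDDS interfaces; names and signatures are assumptions.
public class ManagerRefactorSketch {

    interface Container {
        long getContainerId();
    }

    interface ChunkManager {
        // Container is passed per call rather than resolved internally.
        void writeChunk(Container container, String chunkName, byte[] data);
        byte[] readChunk(Container container, String chunkName);
    }

    // Trivial in-memory implementation to make the sketch runnable.
    static final class InMemoryChunkManager implements ChunkManager {
        private final Map<String, byte[]> chunks = new HashMap<>();

        public void writeChunk(Container container, String chunkName, byte[] data) {
            chunks.put(container.getContainerId() + "/" + chunkName, data);
        }

        public byte[] readChunk(Container container, String chunkName) {
            return chunks.get(container.getContainerId() + "/" + chunkName);
        }
    }

    public static void main(String[] args) {
        Container c = () -> 42L;
        ChunkManager cm = new InMemoryChunkManager();
        cm.writeChunk(c, "chunk0", new byte[]{1, 2, 3});
        System.out.println(cm.readChunk(c, "chunk0").length);  // 3
    }
}
```

Passing the container explicitly keeps the managers stateless with respect to container lookup, which is the decoupling the description aims for.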






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Status: Patch Available  (was: In Progress)

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new ChunkManager and KeyManager interfaces to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Remove the usage of ContainerManager.
>  ## Pass the container to method calls.
>  ## Use layOutversion when reading/deleting chunk files.
>  
>  






[jira] [Updated] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-155:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~hanishakoneru] for the review.

I have committed this to HDDS-48 branch.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch, 
> HDDS-155-HDDS-48.08.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface.
>  # Use the new directory layout proposed in the design document:
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513300#comment-16513300
 ] 

Bharat Viswanadham commented on HDDS-155:
-

I will commit this shortly.

shaded-client issue is not caused by this patch.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch, 
> HDDS-155-HDDS-48.08.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface.
>  # Use the new directory layout proposed in the design document:
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Comment Edited] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513300#comment-16513300
 ] 

Bharat Viswanadham edited comment on HDDS-155 at 6/15/18 3:57 AM:
--

I will commit this shortly.

shaded-client issue is not caused by this patch.


was (Author: bharatviswa):
I will commit this shortly.

shaded-client issue is not caused by this patch.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch, 
> HDDS-155-HDDS-48.08.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface.
>  # Use the new directory layout proposed in the design document:
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513299#comment-16513299
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
22s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
18s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927921/HDDS-155-HDDS-48.08.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux becb8bb8528c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 9a5552b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513289#comment-16513289
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
55s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
19s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
13s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927921/HDDS-155-HDDS-48.08.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux de6100c4c3a7 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 9a5552b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Attachment: HDFS-13682.dirty.repro.branch-2.patch

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.dirty.repro.branch-2.patch, 
> HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token 
> (which is cached by KMSCP) expires, even though the HDFS client RPC has 
> valid Kerberos credentials.






[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513270#comment-16513270
 ] 

Zuoming Zhang commented on HDFS-13676:
--

[~elgoiri] These errors don't seem to be related to my change; just 
double-checking with you.

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When _FSImage.saveFSImageInAllDirs_ is called, no directories actually 
> exist. This is because the _getConf()_ function doesn't specify creating 
> any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.
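For context, the kind of configuration the description refers to might look like the fragment below. The config key constants are real HDFS keys ({{dfs.namenode.name.dir}} / {{dfs.namenode.edits.dir}}); the variable names and the exact lines in the test are assumptions, not the actual patch.

```java
// Hypothetical shape of the two commented-out lines in the test's
// getConf(): point the NameNode at concrete storage directories so that
// FSImage.saveFSImageInAllDirs has somewhere to write.
// The config keys are real; variable names/paths are assumptions.
Configuration conf = new Configuration();
conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDir.toURI().toString());
conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, editsDir.toURI().toString());
```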






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513259#comment-16513259
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 9s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
25s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
18s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Possible null pointer dereference of containerMetaDataPath in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(VolumeSet, 
VolumeChoosingPolicy, String) on exception path  Dereferenced at 
KeyValueContainer.java:containerMetaDataPath in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(VolumeSet, 
VolumeChoosingPolicy, String) on exception path  Dereferenced at 
KeyValueContainer.java:[line 173] |
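The FindBugs row above flags a dereference on an exception path where the variable may still be null. A minimal self-contained sketch of that pattern and the null guard that resolves it (the names and paths are illustrative, not the actual KeyValueContainer code):

```java
import java.io.File;
import java.io.IOException;

public class NullOnExceptionPath {
    static boolean cleanedUp;

    // Shape of the warning: containerMetaDataPath is still null if the
    // failure happens before the assignment, yet the catch block
    // dereferences it while cleaning up.
    static String create(boolean failBeforeAssignment) {
        File containerMetaDataPath = null;
        try {
            if (failBeforeAssignment) {
                throw new IOException("volume choosing failed");
            }
            containerMetaDataPath = new File("container", "metadata");
            return "created";
        } catch (IOException e) {
            // The null check is the fix FindBugs asks for; without it,
            // containerMetaDataPath.delete() would NPE on this path.
            if (containerMetaDataPath != null) {
                cleanedUp = containerMetaDataPath.delete();
            }
            return "failed";
        }
    }

    public static void main(String[] args) {
        System.out.println(create(true));   // failed (guard prevents the NPE)
        System.out.println(create(false));  // created
    }
}
```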
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-155 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513261#comment-16513261
 ] 

genericqa commented on HDFS-13676:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
6s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13676 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927902/HDFS-13676.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux feeba5bf96fb 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 020dd61 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24446/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24446/testReport/ |
| Max. process+thread count | 3376 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-155:

Attachment: HDDS-155-HDDS-48.08.patch

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch, 
> HDDS-155-HDDS-48.08.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513247#comment-16513247
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 1s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
18s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Possible null pointer dereference of containerMetaDataPath in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(VolumeSet, 
VolumeChoosingPolicy, String) on exception path  Dereferenced at 
KeyValueContainer.java:containerMetaDataPath in 
org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(VolumeSet, 
VolumeChoosingPolicy, String) on exception path  Dereferenced at 
KeyValueContainer.java:[line 173] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-155 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13609:
---
Attachment: HDFS-13609-HDFS-12943.002.patch

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513226#comment-16513226
 ] 

Erik Krogen commented on HDFS-13609:


Noticed two issues with {{TestQuorumJournalManager}}. Uploaded v002 addressing 
these.

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513223#comment-16513223
 ] 

Bharat Viswanadham commented on HDDS-155:
-

Attached v07 patch to fix the Jenkins-reported issues.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Updated] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-155:

Attachment: HDDS-155-HDDS-48.07.patch

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch, HDDS-155-HDDS-48.07.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13609:
---
Status: Patch Available  (was: In Progress)

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513219#comment-16513219
 ] 

Erik Krogen commented on HDFS-13609:


Just attached the v001 patch, applied on top of the changes in HDFS-13607 / 
HDFS-13608. Also did some cleanup from the v000 patch and removed Java 8 
functionality (lambdas / streams). Should be ready for review.

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Updated] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13609:
---
Attachment: HDFS-13609-HDFS-12943.001.patch

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-14 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513218#comment-16513218
 ] 

Konstantin Shvachko commented on HDFS-12976:


Hey [~csun], I looked at the test failures. I think the problem is that you 
added a call to {{proto.getServiceStatus()}} in 
{{ConfiguredFailoverProxyProvider.initProxies()}}. It fails in old tests, 
which use fake addresses like {{"machine1.foo.bar:8020"}}, while instantiating 
the proxy.
I think the best way is to keep the {{ConfiguredFailoverProxyProvider}} logic 
unchanged and override {{initProxies()}} in {{ObserverReadProxyProvider}} to 
do the initial filtering.
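That split can be sketched as follows. This is an illustrative model only: the names (initProxies, HAServiceState, ProxyInfo) mirror the discussion but are not the actual Hadoop classes or signatures.

```java
import java.util.ArrayList;
import java.util.List;

public class ProxyFilterSketch {
    enum HAServiceState { ACTIVE, STANDBY, OBSERVER }

    static class ProxyInfo {
        final String address;
        final HAServiceState state;
        ProxyInfo(String address, HAServiceState state) {
            this.address = address;
            this.state = state;
        }
    }

    // Base provider keeps its original behavior: every configured NameNode
    // becomes a proxy, with no RPC made during initialization.
    static class ConfiguredProvider {
        List<ProxyInfo> initProxies(List<ProxyInfo> configured) {
            return configured;
        }
    }

    // The observer-read provider overrides initProxies() to do the initial
    // filtering, leaving the base class untouched.
    static class ObserverReadProvider extends ConfiguredProvider {
        @Override
        List<ProxyInfo> initProxies(List<ProxyInfo> configured) {
            List<ProxyInfo> observers = new ArrayList<>();
            for (ProxyInfo p : configured) {
                if (p.state == HAServiceState.OBSERVER) {
                    observers.add(p);
                }
            }
            return observers;
        }
    }

    // Returns "<all>/<observers-only>" counts for a small sample config.
    static String demo() {
        List<ProxyInfo> configured = new ArrayList<>();
        configured.add(new ProxyInfo("machine1.foo.bar:8020", HAServiceState.ACTIVE));
        configured.add(new ProxyInfo("machine2.foo.bar:8020", HAServiceState.OBSERVER));
        configured.add(new ProxyInfo("machine3.foo.bar:8020", HAServiceState.STANDBY));
        return new ConfiguredProvider().initProxies(configured).size() + "/"
             + new ObserverReadProvider().initProxies(configured).size();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // 3/1
    }
}
```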

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} 
> interface and be able to submit read requests to ANN and SBN(s).






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-14 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513206#comment-16513206
 ] 

BELUGA BEHR commented on HDFS-13448:


[~daryn] I volunteer to give the best I have with the time I have. And yes, if 
I have time, I can cycle back, but it looks like someone with a bit more 
experience should take a more holistic look at this test suite. I certainly 
don't have the background to redesign the entire suite. Besides, it's not 
crazy to think that "someone else" could volunteer to re-write these types of 
things: [HIVE-19846]

I hope you'll consider the latest patch for acceptance.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
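The three behaviors described above (default local-first write, {{NO_LOCAL_WRITE}}, and the proposed random first replica) can be modeled in a small sketch. The flag names echo the discussion, with IGNORE_CLIENT_LOCALITY as a hypothetical stand-in for the proposed flag; the placement logic here is illustrative only, not HDFS's BlockPlacementPolicy.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class PlacementSketch {
    enum Flag { DEFAULT, NO_LOCAL_WRITE, IGNORE_CLIENT_LOCALITY }

    // Nodes are "rack/host" strings; pick the target for the first replica.
    static String firstReplica(List<String> nodes, String clientNode,
                               Flag flag, Random rnd) {
        String clientRack = clientNode.split("/")[0];
        switch (flag) {
            case DEFAULT:
                // Default policy: the writer's own DataNode, if it is one.
                if (nodes.contains(clientNode)) {
                    return clientNode;
                }
                break;
            case NO_LOCAL_WRITE:
                // Avoid the client's host but still prefer its rack --
                // the hot-spotting behavior the JIRA describes.
                for (String n : nodes) {
                    if (!n.equals(clientNode) && n.startsWith(clientRack + "/")) {
                        return n;
                    }
                }
                break;
            default:
                break;
        }
        // Proposed behavior (and fallback): any node, uniformly at random.
        return nodes.get(rnd.nextInt(nodes.size()));
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("r1/dn1", "r1/dn2", "r2/dn3", "r2/dn4");
        Random rnd = new Random(42);
        System.out.println(firstReplica(nodes, "r1/dn1", Flag.DEFAULT, rnd));        // r1/dn1
        System.out.println(firstReplica(nodes, "r1/dn1", Flag.NO_LOCAL_WRITE, rnd)); // r1/dn2
        System.out.println(firstReplica(nodes, "r1/dn1", Flag.IGNORE_CLIENT_LOCALITY, rnd));
    }
}
```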






[jira] [Commented] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513201#comment-16513201
 ] 

Íñigo Goiri commented on HDFS-13676:


Thanks for [^HDFS-13676.001.patch] and [^HDFS-13676-branch-2.000.patch].
When was the change from name-0 to name-0-1 made?
In other words, which one applies to which branch?

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, no directories actually 
> exist. This is because the _getConf()_ function doesn't specify creating any 
> directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why it was commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  It should also fail on Linux, I guess.






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513188#comment-16513188
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 23s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 50s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 16s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 58s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 29s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 38s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 17s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 22s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 46s{color} | {color:red} hadoop-hdds/container-service generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 23s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 55s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 41s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 50s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  containerMetaDataPath is null guaranteed to be dereferenced in org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(VolumeSet, VolumeChoosingPolicy, String) on exception path  Dereferenced at KeyValueContainer.java:[line 161] |
|  |  Exceptional return value of java.io.File.delete() ignored in 

[jira] [Updated] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13608:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

> [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC
> -
>
> Key: HDFS-13608
> URL: https://issues.apache.org/jira/browse/HDFS-13608
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13608-HDFS-12943.000.patch, 
> HDFS-13608-HDFS-12943.001.patch, HDFS-13608-HDFS-12943.002.patch, 
> HDFS-13608-HDFS-12943.003.patch, HDFS-13608-HDFS-12943.004.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to make the JournalNode-side changes necessary to support 
> serving edits via RPC. This includes interacting with the cache added in 
> HDFS-13607.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13608) [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC

2018-06-14 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513186#comment-16513186
 ] 

Erik Krogen commented on HDFS-13608:


I just committed this to branch HDFS-12943. Thanks for the reviews 
[~vagarychen] and [~shv]! Now for part 3 :) HDFS-13609

> [Edit Tail Fast Path Pt 2] Add ability for JournalNode to serve edits via RPC
> -
>
> Key: HDFS-13608
> URL: https://issues.apache.org/jira/browse/HDFS-13608
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13608-HDFS-12943.000.patch, 
> HDFS-13608-HDFS-12943.001.patch, HDFS-13608-HDFS-12943.002.patch, 
> HDFS-13608-HDFS-12943.003.patch, HDFS-13608-HDFS-12943.004.patch
>
>
> See HDFS-13150 for full design.
> This JIRA is to make the JournalNode-side changes necessary to support 
> serving edits via RPC. This includes interacting with the cache added in 
> HDFS-13607.






[jira] [Comment Edited] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-06-14 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513173#comment-16513173
 ] 

Ewan Higgs edited comment on HDFS-13310 at 6/15/18 12:32 AM:
-

PUT_FILE adds extra complication here. When writing a file, if a DN splits but 
is still writing to the remote storage then it could interfere with another DN 
that is tasked with writing the file. This should be solved by adding a 
`complete` phase to the PUT_FILE. At this point, there's very little difference 
between PUT_FILE and MULTIPART_PUT_PART. With this in mind, consider removing 
PUT_PART.


was (Author: ehiggs):
Feedback from [~chris.douglas]:

PUT_FILE adds extra complication here. When writing a file, if a DN splits but 
is still writing to the remote storage then it could interfere with another DN 
that is tasked with writing the file. This should be solved by adding a 
`complete` phase to the PUT_FILE. At this point, there's very little difference 
between PUT_FILE and MULTIPART_PUT_PART. With this in mind, consider removing 
PUT_PART.

> [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).






[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-06-14 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513176#comment-16513176
 ] 

Ewan Higgs commented on HDFS-13310:
---

The protocol should also have the target storage uuid so that the datanode 
knows which FsVolumeImpl (or ProvidedVolumeImpl, rather) should be updated with 
the new replica information.

> [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).
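Taken together with the storage-uuid point above, the command shape being discussed could be sketched roughly as follows. This is a hypothetical illustration only: the class and field names are invented here, not the actual DatanodeProtocol types from the HDFS-12090 branch.

```java
import java.util.UUID;

// Hypothetical sketch of a DNA_BACKUP heartbeat sub-command.
// The real DatanodeProtocol changes live in the HDFS-12090 patches
// and may differ in names and structure.
public class BackupCommandSketch {
  // Two sub-commands, as the description suggests: whole-file put for
  // files of <=1 block, or one part of a multipart upload (HDFS-13186).
  public enum SubCommand { PUT_FILE, MULTIPART_PUT_PART }

  public final SubCommand subCommand;
  public final long blockId;              // block to back up
  public final String targetStorageUuid;  // which (Provided) volume to update
  public final int partNumber;            // only meaningful for MULTIPART_PUT_PART

  public BackupCommandSketch(SubCommand cmd, long blockId,
                             String targetStorageUuid, int partNumber) {
    this.subCommand = cmd;
    this.blockId = blockId;
    this.targetStorageUuid = targetStorageUuid;
    this.partNumber = partNumber;
  }

  public static void main(String[] args) {
    BackupCommandSketch c = new BackupCommandSketch(
        SubCommand.MULTIPART_PUT_PART, 42L, UUID.randomUUID().toString(), 3);
    System.out.println(c.subCommand + " block=" + c.blockId
        + " part=" + c.partNumber);
  }
}
```

Carrying the target storage UUID in the command lets the datanode route the new replica information to the right volume implementation without guessing.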






[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Description: 
This Jira is used to add VolumeIO stats in the datanode.

Add IO calculations for Chunk operations.

readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.

 

  was:
This Jira is used to add VolumeIO stats in the datanode.

Add IO calculations for Chunk operations.

readBytes, readOpCount, writeBytes, writeOpCount.


> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount, readTime, writeTime.
>  
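As a rough sketch of how the counters listed above could be kept, a small thread-safe per-volume holder might look like this. The class and method names here are assumptions for illustration, not the committed Ozone code.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a per-volume IO stats holder: byte/op counters plus
// cumulative read/write time, updated from the chunk read/write paths.
public class VolumeIOStats {
  private final AtomicLong readBytes = new AtomicLong();
  private final AtomicLong readOpCount = new AtomicLong();
  private final AtomicLong writeBytes = new AtomicLong();
  private final AtomicLong writeOpCount = new AtomicLong();
  private final AtomicLong readTimeNanos = new AtomicLong();
  private final AtomicLong writeTimeNanos = new AtomicLong();

  // Called once per readChunk: accumulate bytes, ops, and elapsed time.
  public void recordRead(long bytes, long nanos) {
    readBytes.addAndGet(bytes);
    readOpCount.incrementAndGet();
    readTimeNanos.addAndGet(nanos);
  }

  // Called once per writeChunk (and analogously for deleteChunk).
  public void recordWrite(long bytes, long nanos) {
    writeBytes.addAndGet(bytes);
    writeOpCount.incrementAndGet();
    writeTimeNanos.addAndGet(nanos);
  }

  public long getReadBytes() { return readBytes.get(); }
  public long getReadOpCount() { return readOpCount.get(); }
  public long getWriteBytes() { return writeBytes.get(); }
  public long getWriteOpCount() { return writeOpCount.get(); }
  public long getReadTimeNanos() { return readTimeNanos.get(); }
  public long getWriteTimeNanos() { return writeTimeNanos.get(); }
}
```

AtomicLong keeps the counters safe under concurrent chunk operations without a shared lock on the hot path.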






[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks

2018-06-14 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513173#comment-16513173
 ] 

Ewan Higgs commented on HDFS-13310:
---

Feedback from [~chris.douglas]:

PUT_FILE adds extra complication here. When writing a file, if a DN splits but 
is still writing to the remote storage then it could interfere with another DN 
that is tasked with writing the file. This should be solved by adding a 
`complete` phase to the PUT_FILE. At this point, there's very little difference 
between PUT_FILE and MULTIPART_PUT_PART. With this in mind, consider removing 
PUT_PART.

> [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup 
> blocks
> --
>
> Key: HDFS-13310
> URL: https://issues.apache.org/jira/browse/HDFS-13310
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13310-HDFS-12090.001.patch, 
> HDFS-13310-HDFS-12090.002.patch
>
>
> As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands 
> in the heartbeat response that instructs it to backup a block.
> This should take the form of two sub commands: PUT_FILE (when the file is <=1 
> block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see 
> HDFS-13186).






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Attachment: HDFS-13676.001.patch

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> HDFS-13676.001.patch, TestEditLogRace-Report-branch-2.001.txt, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Affects Version/s: 3.1.0

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report-branch-2.001.txt, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Target Version/s: 2.9.1, 3.1.0  (was: 2.9.1)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report-branch-2.001.txt, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Fix Version/s: (was: 2.9.1)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report-branch-2.001.txt, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13186) [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations

2018-06-14 Thread Chris Douglas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-13186:
-
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations
> -
>
> Key: HDFS-13186
> URL: https://issues.apache.org/jira/browse/HDFS-13186
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, 
> HDFS-13186.003.patch, HDFS-13186.004.patch, HDFS-13186.005.patch, 
> HDFS-13186.006.patch, HDFS-13186.007.patch, HDFS-13186.008.patch, 
> HDFS-13186.009.patch
>
>
> To write files in parallel to an external storage system as in HDFS-12090, 
> there are two approaches:
>  # Naive approach: use a single datanode per file that copies blocks locally 
> as it streams data to the external service. This requires a copy for each 
> block inside the HDFS system and then a copy for the block to be sent to the 
> external system.
>  # Better approach: Single point (e.g. Namenode or SPS style external client) 
> and Datanodes coordinate in a multipart - multinode upload.
> This system needs to work with multiple back ends and needs to coordinate 
> across the network. So we propose an API that resembles the following:
> {code:java}
> public UploadHandle multipartInit(Path filePath) throws IOException;
> public PartHandle multipartPutPart(InputStream inputStream,
> int partNumber, UploadHandle uploadId) throws IOException;
> public void multipartComplete(Path filePath,
> List<Pair<Integer, PartHandle>> handles, 
> UploadHandle multipartUploadId) throws IOException;{code}
> Here, UploadHandle and PartHandle are opaque handles in the vein of 
> PathHandle so they can be serialized and deserialized in the hadoop-hdfs project 
> without knowledge of how to deserialize e.g. S3A's version of an UploadHandle 
> and PartHandle.
> In an object store such as S3A, the implementation is straightforward. In 
> the case of writing multipart/multinode to HDFS, we can write each block as a 
> file part. The complete call will perform a concat on the blocks.
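To make the proposed call sequence concrete, a toy in-memory implementation might look like the following. This is purely illustrative: the handle types and Pair are stand-ins defined locally, not the actual classes from the patch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy in-memory multipart uploader following the proposed call sequence:
// multipartInit -> multipartPutPart (x N, possibly from different nodes)
// -> multipartComplete.
public class InMemoryMultipartUploader {
  // Local stand-ins for the opaque handle types in the proposal.
  public record UploadHandle(long id) {}
  public record PartHandle(long uploadId, int partNumber) {}
  public record Pair<A, B>(A first, B second) {}

  private long nextId = 0;
  private final Map<Long, TreeMap<Integer, byte[]>> uploads = new HashMap<>();
  private final Map<String, byte[]> files = new HashMap<>();

  public UploadHandle multipartInit(String filePath) {
    UploadHandle h = new UploadHandle(nextId++);
    uploads.put(h.id(), new TreeMap<>());
    return h;
  }

  public PartHandle multipartPutPart(InputStream in, int partNumber,
                                     UploadHandle uploadId) throws IOException {
    uploads.get(uploadId.id()).put(partNumber, in.readAllBytes());
    return new PartHandle(uploadId.id(), partNumber);
  }

  public void multipartComplete(String filePath,
                                List<Pair<Integer, PartHandle>> handles,
                                UploadHandle uploadId) throws IOException {
    // Concatenate parts in part-number order, like an HDFS concat on blocks.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (Map.Entry<Integer, byte[]> e : uploads.remove(uploadId.id()).entrySet()) {
      out.write(e.getValue());
    }
    files.put(filePath, out.toByteArray());
  }

  public byte[] read(String filePath) { return files.get(filePath); }
}
```

In the intended design, each datanode would upload its block as one part, and a coordinator would issue multipartComplete once every part handle is collected; the complete step then maps naturally onto a concat of the block files.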






[jira] [Work started] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-169 started by Bharat Viswanadham.
---
> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount.






[jira] [Assigned] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-169:
---

Assignee: Bharat Viswanadham

> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount.






[jira] [Updated] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-169:

Description: 
This Jira is used to add VolumeIO stats in the datanode.

Add IO calculations for Chunk operations.

readBytes, readOpCount, writeBytes, writeOpCount.

  was:
This Jira is used to add VolumeIO stats in the datanode.

During writeChunk, readChunk, deleteChunk add IO calculations for each 
operation like 

readBytes, readOpCount, writeBytes, writeOpCount.


> Add Volume IO Stats 
> 
>
> Key: HDDS-169
> URL: https://issues.apache.org/jira/browse/HDDS-169
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> This Jira is used to add VolumeIO stats in the datanode.
> Add IO calculations for Chunk operations.
> readBytes, readOpCount, writeBytes, writeOpCount.






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513161#comment-16513161
 ] 

Hanisha Koneru commented on HDDS-155:
-

Thanks [~bharatviswa].

LGTM. +1 pending Jenkins.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Created] (HDDS-169) Add Volume IO Stats

2018-06-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-169:
---

 Summary: Add Volume IO Stats 
 Key: HDDS-169
 URL: https://issues.apache.org/jira/browse/HDDS-169
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


This Jira is used to add VolumeIO stats in the datanode.

During writeChunk, readChunk, deleteChunk add IO calculations for each 
operation like 

readBytes, readOpCount, writeBytes, writeOpCount.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Attachment: TestEditLogRace-Report-branch-2.001.txt

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.9.1
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report-branch-2.001.txt, TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Fix Version/s: (was: 3.1.0)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.9.1
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Target Version/s: 2.9.1  (was: 3.1.0, 2.9.1)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 2.9.1
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDFS-13676) TestEditLogRace fails on Windows

2018-06-14 Thread Zuoming Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zuoming Zhang updated HDFS-13676:
-
Affects Version/s: (was: 3.1.0)

> TestEditLogRace fails on Windows
> 
>
> Key: HDFS-13676
> URL: https://issues.apache.org/jira/browse/HDFS-13676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.1
>Reporter: Zuoming Zhang
>Assignee: Zuoming Zhang
>Priority: Minor
>  Labels: Windows
> Fix For: 3.1.0, 2.9.1
>
> Attachments: HDFS-13676-branch-2.000.patch, HDFS-13676.000.patch, 
> TestEditLogRace-Report.000.txt
>
>
> _TestEditLogRace_ fails on Windows
>  
> Problem:
> When trying to call _FSImage.saveFSImageInAllDirs_, there are actually no 
> directories present. This is because the _getConf()_ function doesn't 
> specify creating any directories.
>  
> Fix:
> Uncomment the two lines that configure the directories to be created.
>  
> Concern:
> Not sure why they were commented out in change 
> [https://github.com/apache/hadoop/commit/3cb7ae11a839c01b8be629774874c1873f51b747]
>  And I guess it should also fail on Linux.






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Attachment: HDDS-160-HDDS-48.01.patch

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
> and Chunk related operations.
>  # Changes to current existing Keymanager and ChunkManager are:
>  ## Removal of usage of ContainerManager.
>  ## Passing container to method calls.
>  ## Using layOutversion during reading/deleting chunk files.
>  
>  






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Attachment: (was: HDDS-160-HDDS-48.01.patch)

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of the usage of ContainerManager.
>  ## Passing the container to method calls.
>  ## Using layoutVersion during reading/deleting chunk files.
>  
>  






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Description: 
This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
key- and chunk-related operations.
 # Changes to the existing KeyManager and ChunkManager are:
 ## Removal of the usage of ContainerManager.
 ## Passing the container to method calls.
 ## Using layoutVersion during reading/deleting chunk files.

 

 

  was:
This Jira is to add new Interface ChunkManager and KeyManager to perform Key 
and Chunk related operations.
 # Changes to current existing Keymanager and ChunkManager are:
 ## Removal of usage of ContainerManager.
 ## Passing container to method calls.
 ## Using layOutversion during reading/deleting chunk files.

Add a new Class KeyValueManager to implement ContainerManager.

 


> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira is to add new interfaces, ChunkManager and KeyManager, to perform 
> key- and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of the usage of ContainerManager.
>  ## Passing the container to method calls.
>  ## Using layoutVersion during reading/deleting chunk files.
>  
>  






[jira] [Comment Edited] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513133#comment-16513133
 ] 

Bharat Viswanadham edited comment on HDDS-155 at 6/14/18 11:32 PM:
---

Hi [~hanishakoneru]

Thanks for the review.

Addressed review comments and Jenkins reported issues in patch v06.


was (Author: bharatviswa):
Hi [~hanishakoneru]

Thanks for the review.

Addressed review comments in patch v06.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface
>  # Use the new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513133#comment-16513133
 ] 

Bharat Viswanadham commented on HDDS-155:
-

Hi [~hanishakoneru]

Thanks for the review.

Addressed review comments in patch v06.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface
>  # Use the new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Updated] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-155:

Attachment: HDDS-155-HDDS-48.06.patch

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch, 
> HDDS-155-HDDS-48.06.patch
>
>
> This Jira is to add the following:
>  # Implement the Container interface
>  # Use the new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Commented] (HDDS-146) Refactor the structure of the acceptance tests

2018-06-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513131#comment-16513131
 ] 

Hudson commented on HDDS-146:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14432 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14432/])
HDDS-146. Refactor the structure of the acceptance tests. Contributed by 
(aengineer: rev 020dd61988b1d47971e328174135d54baf5d41aa)
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/basic/.env
* (edit) start-build-env.sh
* (delete) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-shell.robot
* (delete) 
hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone.robot
* (edit) hadoop-ozone/acceptance-test/pom.xml
* (delete) hadoop-ozone/acceptance-test/src/test/compose/docker-compose.yaml
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/basic/docker-config
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/.env
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/basic/basic.robot
* (add) hadoop-ozone/acceptance-test/src/test/acceptance/commonlib.robot
* (add) hadoop-ozone/acceptance-test/dev-support/docker/Dockerfile
* (delete) hadoop-ozone/acceptance-test/src/test/compose/.env
* (add) 
hadoop-ozone/acceptance-test/src/test/acceptance/basic/docker-compose.yaml
* (add) hadoop-ozone/acceptance-test/dev-support/bin/robot-dnd-all.sh
* (add) hadoop-ozone/acceptance-test/dev-support/docker/docker-compose.yaml
* (edit) hadoop-ozone/acceptance-test/dev-support/bin/robot-all.sh
* (delete) hadoop-ozone/acceptance-test/src/test/compose/docker-config
* (edit) hadoop-ozone/acceptance-test/dev-support/bin/robot.sh


> Refactor the structure of the acceptance tests
> --
>
> Key: HDDS-146
> URL: https://issues.apache.org/jira/browse/HDDS-146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-146.001.patch, HDDS-146.002.patch, 
> HDDS-146.004.patch, HDDS-146.005.patch
>
>
> I started to fix the packaging of the ozone file system, which requires 
> additional acceptance tests.
> The original acceptance test was added when the suite was only a single file, 
> but (fortunately) now we have multiple files and multiple tests, and ozonefs 
> requires even more.
> To make it easier to handle multiple acceptance tests I propose some changes 
> to the project structure. This patch includes the following changes:
>  # All the common start/stop/check keywords are moved out to a common 
> library file (commonlib.robot). All the existing files are simplified.
>  # The ozone-shell tests are simplified by using parametrized tests. We don't 
> need to repeat the same steps multiple times.
>  # The directories in the project are simplified. Both the compose files and 
> robot files are in the same directory. The basedir is handled from the robot 
> files. Now it's easier to run the tests locally (go to the dir and do a simple 
> call 'robot basic.robot') or start the containers (use docker-compose from 
> the base directory).
>  # I adjusted the logging (the warning about the missing native library is not 
> required).
>  # Decreased the heartbeat interval (to make the tests faster).
>  # I improved the ozone-shell tests by adding a few additional checks.
>  
>  






[jira] [Commented] (HDFS-13680) Httpfs does not support custom authentication

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513112#comment-16513112
 ] 

genericqa commented on HDFS-13680:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
52s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
12s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
47s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13680 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927881/HDFS-13680.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 573f490d096d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d7449d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24443/testReport/ |
| Max. process+thread count | 633 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24443/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Httpfs does not support custom authentication
> 

[jira] [Updated] (HDDS-146) Refactor the structure of the acceptance tests

2018-06-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-146:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ljain], [~nandakumar131] Thanks for testing and reviews. [~elek] Thanks for 
the contribution. I have committed this patch to trunk.

> Refactor the structure of the acceptance tests
> --
>
> Key: HDDS-146
> URL: https://issues.apache.org/jira/browse/HDDS-146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-146.001.patch, HDDS-146.002.patch, 
> HDDS-146.004.patch, HDDS-146.005.patch
>
>
> I started to fix the packaging of the ozone file system, which requires 
> additional acceptance tests.
> The original acceptance test was added when the suite was only a single file, 
> but (fortunately) now we have multiple files and multiple tests, and ozonefs 
> requires even more.
> To make it easier to handle multiple acceptance tests I propose some changes 
> to the project structure. This patch includes the following changes:
>  # All the common start/stop/check keywords are moved out to a common 
> library file (commonlib.robot). All the existing files are simplified.
>  # The ozone-shell tests are simplified by using parametrized tests. We don't 
> need to repeat the same steps multiple times.
>  # The directories in the project are simplified. Both the compose files and 
> robot files are in the same directory. The basedir is handled from the robot 
> files. Now it's easier to run the tests locally (go to the dir and do a simple 
> call 'robot basic.robot') or start the containers (use docker-compose from 
> the base directory).
>  # I adjusted the logging (the warning about the missing native library is not 
> required).
>  # Decreased the heartbeat interval (to make the tests faster).
>  # I improved the ozone-shell tests by adding a few additional checks.
>  
>  






[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513099#comment-16513099
 ] 

genericqa commented on HDFS-11520:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 48m 
41s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
19s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
58s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-11520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927875/HDFS-11520.003.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 6440fdb652bc 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d7449d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24442/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24442/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.




[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-06-14 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513084#comment-16513084
 ] 

Daryn Sharp commented on HDFS-13448:


{quote}They are generated by using existing test patterns on this new feature. 
Cleanup should be done in a separate ticket to remove the deprecated calls 
across the board.
{quote}
That's not exactly how deprecation works. :)  Deprecation is "please don't use 
this anymore", not "please keep using this until nobody else uses it".  
Let's consider the proposed future cleanup.  Are you volunteering to do it?  If 
not, is it fair to make someone else re-write your tests?

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch, HDFS-13448.4.patch, HDFS-13448.5.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch, HDFS-13448.9.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
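The placement behaviors being compared can be illustrated with a pure simulation. This is a sketch only — the `Policy` names (including `RANDOM_FIRST`) and the rack-prefix convention are invented here for clarity and are not the HDFS API:

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Simulates first-replica target selection for a writer running on a datanode.
// DEFAULT: local node. NO_LOCAL_WRITE: not the local node, but still the local
// rack. RANDOM_FIRST (the proposed behavior): any node, uniformly at random.
class FirstReplicaChooser {
    enum Policy { DEFAULT, NO_LOCAL_WRITE, RANDOM_FIRST }

    static String chooseFirst(List<String> cluster, String localNode,
                              String localRackPrefix, Policy policy, Random rng) {
        switch (policy) {
            case DEFAULT:
                // Writer on a datanode: first replica lands on the local machine.
                return localNode;
            case NO_LOCAL_WRITE: {
                // Not the local node, but the local rack is still preferred,
                // so the rack as a whole keeps filling up faster.
                List<String> rackMates = cluster.stream()
                        .filter(n -> n.startsWith(localRackPrefix) && !n.equals(localNode))
                        .collect(Collectors.toList());
                return rackMates.get(rng.nextInt(rackMates.size()));
            }
            default:
                // Proposed flag: any datanode in the cluster, uniformly.
                return cluster.get(rng.nextInt(cluster.size()));
        }
    }
}
```

With a single heavy writer (the Flume agent case above), only the uniform policy spreads first replicas evenly across racks.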






[jira] [Commented] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513083#comment-16513083
 ] 

Xiao Chen commented on HDFS-13682:
--

Updated a patch that reproduces this. One potential solution is to call the KMS 
as the login user, because all of these are hdfs superuser-only ops. Uncommenting 
the changes in FSDirEncryptionZoneOp would pass the test. I propose that in this 
jira we do this for createZone.

This is passing in CDH5 and failing in CDH6. I initially suspected 
HADOOP-9747, but cannot blame it for anything. :)
One difference I noticed is that in CDH5 we don't have [these lines in 
KerberosAuthenticator|https://github.com/apache/hadoop/blob/branch-3.0.0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java#L272-L273],
 which were added by HADOOP-11332. I'm not sure what the correct solution is 
here regarding that, but if we do this as the login user, the check should pass 
and no new subject needs to be created.

[~daryn], may I ask for your thoughts here? Thanks for the time.
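The "call the KMS as the login user" idea can be modeled with a toy identity switch. This is a self-contained stand-in only — the real code would use Hadoop's UserGroupInformation.getLoginUser().doAs(...), and the identity strings below are invented:

```java
import java.util.function.Supplier;

// Toy model of the proposed fix: run superuser-only KMS operations under the
// NameNode's login identity (whose Kerberos credentials stay fresh) instead
// of the RPC caller's cached auth token, which may have expired.
class AsLoginUser {
    static String currentUser = "rpc-caller";   // simulated per-call identity
    static final String LOGIN_USER = "hdfs-namenode";

    static <T> T doAsLoginUser(Supplier<T> action) {
        String saved = currentUser;
        currentUser = LOGIN_USER;               // switch identity for the call
        try {
            return action.get();
        } finally {
            currentUser = saved;                // always restore the caller
        }
    }
}
```

The key property the fix relies on is the try/finally shape: the privileged identity is scoped to the one KMS call and the caller's identity is restored even on failure.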

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.






[jira] [Created] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13682:


 Summary: Cannot create encryption zone after KMS auth token expires
 Key: HDFS-13682
 URL: https://issues.apache.org/jira/browse/HDFS-13682
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption, namenode
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Xiao Chen
 Attachments: HDFS-13682.dirty.repro.patch

Our internal testing reported this behavior recently.
{noformat}
[root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt /cdep/keytabs/hdfs.keytab 
hdfs -l 30d -r 30d
[root@nightly6x-1 ~]# sudo -u hdfs klist
Ticket cache: FILE:/tmp/krb5cc_994
Default principal: h...@gce.cloudera.com

Valid starting   Expires  Service principal
06/12/2018 03:24:09  07/12/2018 03:24:09  
krbtgt/gce.cloudera@gce.cloudera.com
[root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 -path 
/user/systest/ez
RemoteException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
{noformat}

Upon further investigation, it's because the KMS client (cached in the HDFS NN) 
cannot authenticate with the server after the authentication token (which is 
cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
credentials.






[jira] [Updated] (HDFS-13682) Cannot create encryption zone after KMS auth token expires

2018-06-14 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13682:
-
Attachment: HDFS-13682.dirty.repro.patch

> Cannot create encryption zone after KMS auth token expires
> --
>
> Key: HDFS-13682
> URL: https://issues.apache.org/jira/browse/HDFS-13682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, namenode
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HDFS-13682.dirty.repro.patch
>
>
> Our internal testing reported this behavior recently.
> {noformat}
> [root@nightly6x-1 ~]# sudo -u hdfs /usr/bin/kinit -kt 
> /cdep/keytabs/hdfs.keytab hdfs -l 30d -r 30d
> [root@nightly6x-1 ~]# sudo -u hdfs klist
> Ticket cache: FILE:/tmp/krb5cc_994
> Default principal: h...@gce.cloudera.com
> Valid starting   Expires  Service principal
> 06/12/2018 03:24:09  07/12/2018 03:24:09  
> krbtgt/gce.cloudera@gce.cloudera.com
> [root@nightly6x-1 ~]# sudo -u hdfs hdfs crypto -createZone -keyName key77 
> -path /user/systest/ez
> RemoteException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> {noformat}
> Upon further investigation, it's because the KMS client (cached in the HDFS 
> NN) cannot authenticate with the server after the authentication token (which 
> is cached by KMSCP) expires, even if the HDFS client RPC has valid Kerberos 
> credentials.






[jira] [Commented] (HDFS-13473) DataNode update BlockKeys using mode PULL rather than PUSH from NameNode

2018-06-14 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513074#comment-16513074
 ] 

Daryn Sharp commented on HDFS-13473:


Upon quick & cursory review, it generally looks good.
# Doesn't look like the NN will send keys to older DNs?  Need to make sure the 
NN sends the keys if the DN isn't sending the version.
# Why is the {{BPServiceActor}} swallowing {{IllegalArgumentException}}?
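The version check asked for in point 1 might look roughly like this; the helper 
and its name are hypothetical, not code from the patch:

```java
public class BlockKeyCheck {
  // Sketch of the compatibility check: a DN that does not report a block key
  // version (an older DN) must always be sent the keys; a DN that does report
  // one only needs the keys when its version is behind the NN's.
  static boolean shouldSendKeys(Integer dnKeyVersion, int nnKeyVersion) {
    if (dnKeyVersion == null) {
      return true; // older DN: no version in the heartbeat, always push keys
    }
    return dnKeyVersion < nnKeyVersion;
  }

  public static void main(String[] args) {
    System.out.println(shouldSendKeys(null, 7)); // older DN -> true
    System.out.println(shouldSendKeys(7, 7));    // up to date -> false
  }
}
```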

> DataNode update BlockKeys using mode PULL rather than PUSH from NameNode
> 
>
> Key: HDFS-13473
> URL: https://issues.apache.org/jira/browse/HDFS-13473
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-13473-trunk.001.patch, HDFS-13473-trunk.002.patch
>
>
> Currently, block key updates on the DataNode are passive: they depend on whether 
> the NameNode returns a #KeyUpdateCommand in the heartbeat response.
> This block key synchronization mode has several problems:
> a. The NameNode cannot tell whether the block keys reached the DataNode successfully;
> b. The NameNode is also unaware when a DataNode hits an exception while receiving 
> or processing a heartbeat response that includes a BlockKeyCommand,
> as mentioned in HDFS-13441 and HDFS-12749.
> So I propose changing the model from the NameNode pushing block keys to DataNodes 
> to DataNodes pulling block keys.






[jira] [Updated] (HDFS-13680) Httpfs does not support custom authentication

2018-06-14 Thread Joris Nogneng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joris Nogneng updated HDFS-13680:
-
Status: Patch Available  (was: Open)

-01: Override the setAuthHandlerClass method in HttpFSAuthenticationFilter.java to 
allow Httpfs to use any custom Auth Handler Class by setting the property 
"httpfs.authentication.type".
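One way the lookup behind "httpfs.authentication.type" could work is to keep the 
known shorthand values mapped to the built-in handlers and treat any other value 
as a fully qualified custom handler class name. The resolver below is a 
hypothetical sketch; only the built-in handler class names are real:

```java
public class AuthHandlerResolver {
  // Map shorthand auth types to their built-in handler classes; fall through
  // to interpreting the value itself as a custom handler class name.
  static String resolveHandlerClass(String authType) {
    switch (authType) {
      case "simple":
        return "org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler";
      case "kerberos":
        return "org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler";
      default:
        return authType; // custom handler: the configured value is the class name
    }
  }

  public static void main(String[] args) {
    System.out.println(resolveHandlerClass("com.example.MyAuthHandler"));
  }
}
```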

> Httpfs does not support custom authentication
> -
>
> Key: HDFS-13680
> URL: https://issues.apache.org/jira/browse/HDFS-13680
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Joris Nogneng
>Priority: Major
> Attachments: HDFS-13680.01.patch
>
>
> Currently Httpfs Authentication Filter does not support any custom 
> authentication: the Authentication Handler can only be 
> PseudoAuthenticationHandler or KerberosDelegationTokenAuthenticationHandler.
> We should allow other authentication handlers to manage custom authentication.






[jira] [Updated] (HDFS-13680) Httpfs does not support custom authentication

2018-06-14 Thread Joris Nogneng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joris Nogneng updated HDFS-13680:
-
Attachment: HDFS-13680.01.patch

> Httpfs does not support custom authentication
> -
>
> Key: HDFS-13680
> URL: https://issues.apache.org/jira/browse/HDFS-13680
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Joris Nogneng
>Priority: Major
> Attachments: HDFS-13680.01.patch
>
>
> Currently Httpfs Authentication Filter does not support any custom 
> authentication: the Authentication Handler can only be 
> PseudoAuthenticationHandler or KerberosDelegationTokenAuthenticationHandler.
> We should allow other authentication handlers to manage custom authentication.






[jira] [Comment Edited] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513041#comment-16513041
 ] 

Hanisha Koneru edited comment on HDDS-155 at 6/14/18 9:59 PM:
--

Thanks for the update [~bharatviswa].

Some comments (Many of them are just NITs.):
 # Can we rename the following in {{DatanodeContainerProtocol.proto}}
 CONTAINER_REQUIRED_FILES_CREATE_ERROR -> CONTAINER_FILES_CREATE_ERROR
 CONTAINER_CHECKSUM_FILE_CALCULATE_ERROR -> CONTAINER_CHECKSUM_ERROR
 # {{KeyValueContainer}} Line 131, 133 : Instead of 
"String.valueOf(containerId)", we can reuse containerName
 # {{KeyValueContainer#updateRequiredFiles()}} -> when update succeeds, the 
backup files must be deleted.
 # In all scenarios where the createContainer fails, we should delete the 
containerBase dir.
 # Can we rename {{createRequiredFiles}} and {{updateRequiredFiles}} to 
createContainerFile and updateContainerFile? Required doesn’t tell us instantly 
what the files are. I think it is ok not to mention that the checksum file is 
also created as part of createContainerFile.
 # Line 225 : "Unable to delete container checksum backup file" -> Since we are 
deleting the temporary file, we can have "Unable to delete container temporary 
checksum file".
 # {{KeyValueContainer#updateRequiredFiles()}}, when throwing 
INVALID_CONTAINER_STATE or when the restore fails, we should set the state of 
the container to INVALID.
 # When restoring from backup files, can we add an info log message saying 
"update failed, so restoring the container files."
 # Javadoc of computeChecksum is misleading. "Create checksum file of the 
.container file." -> "Compute checksum of the .container file".
 # Can we have a file handle to the {{.db}} file in {{KeyValueContainerData}}, as 
this would be used by every key operation.
 # {{KeyValueContainerUtil#verifyIsNewContainer()}} -> existence of 
containerBasePath should be a sufficient condition to fail createContainer.


was (Author: hanishakoneru):
Thanks for the update [~bharatviswa].

Some comments (Many of them are just NITs.):
 # Can we rename the following in {{DatanodeContainerProtocol.proto}}
 CONTAINER_REQUIRED_FILES_CREATE_ERROR -> CONTAINER_FILES_CREATE_ERROR
 CONTAINER_CHECKSUM_FILE_CALCULATE_ERROR -> CONTAINER_CHECKSUM_ERROR
 # {{KeyValueContainer}} Line 131, 133 : Instead of 
"String.valueOf(containerId)", we can reuse containerName
 # {{KeyValueContainer#updateRequiredFiles()}} -> when update succeeds, the 
backup files must be deleted. In all scenarios where the createContainer fails, 
we should delete the containerBase dir.
 # Can we rename {{createRequiredFiles}} and {{updateRequiredFiles}} to 
createContainerFile and updateContainerFile? Required doesn’t tell us instantly 
what the files are. I think it is ok not to mention that the checksum file is 
also created as part of createContainerFile.
 # Line 225 : "Unable to delete container checksum backup file" -> Since we are 
deleting the temporary file, we can have "Unable to delete container temporary 
checksum file".
 # {{KeyValueContainer#updateRequiredFiles()}}, when throwing 
INVALID_CONTAINER_STATE or when the restore fails, we should set the state of 
the container to INVALID.
 # When restoring from backup files, can we add an info log message saying 
"update failed, so restoring the container files."
 # Javadoc of computeChecksum is misleading. "Create checksum file of the 
.container file." -> "Compute checksum of the .container file".
 # Can we have a file handle to the {{.db}} file in {{KeyValueContainerData}}, as 
this would be used by every key operation.
 # {{KeyValueContainerUtil#verifyIsNewContainer()}} -> existence of 
containerBasePath should be a sufficient condition to fail createContainer.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Comment Edited] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513041#comment-16513041
 ] 

Hanisha Koneru edited comment on HDDS-155 at 6/14/18 9:59 PM:
--

Thanks for the update [~bharatviswa].

Some comments (Many of them are just NITs.):
 # Can we rename the following in {{DatanodeContainerProtocol.proto}}
 CONTAINER_REQUIRED_FILES_CREATE_ERROR -> CONTAINER_FILES_CREATE_ERROR
 CONTAINER_CHECKSUM_FILE_CALCULATE_ERROR -> CONTAINER_CHECKSUM_ERROR
 # {{KeyValueContainer}} Line 131, 133 : Instead of 
"String.valueOf(containerId)", we can reuse containerName
 # {{KeyValueContainer#updateRequiredFiles()}} -> when update succeeds, the 
backup files must be deleted. In all scenarios where the createContainer fails, 
we should delete the containerBase dir.
 # Can we rename {{createRequiredFiles}} and {{updateRequiredFiles}} to 
createContainerFile and updateContainerFile? Required doesn’t tell us instantly 
what the files are. I think it is ok not to mention that the checksum file is 
also created as part of createContainerFile.
 # Line 225 : "Unable to delete container checksum backup file" -> Since we are 
deleting the temporary file, we can have "Unable to delete container temporary 
checksum file".
 # {{KeyValueContainer#updateRequiredFiles()}}, when throwing 
INVALID_CONTAINER_STATE or when the restore fails, we should set the state of 
the container to INVALID.
 # When restoring from backup files, can we add an info log message saying 
"update failed, so restoring the container files."
 # Javadoc of computeChecksum is misleading. "Create checksum file of the 
.container file." -> "Compute checksum of the .container file".
 # Can we have a file handle to the {{.db}} file in {{KeyValueContainerData}}, as 
this would be used by every key operation.
 # {{KeyValueContainerUtil#verifyIsNewContainer()}} -> existence of 
containerBasePath should be a sufficient condition to fail createContainer.


was (Author: hanishakoneru):
Thanks for the update [~bharatviswa].

Some comments (Many of them are just NITs.):
 # Can we rename the following in {{DatanodeContainerProtocol.proto}}
 CONTAINER_REQUIRED_FILES_CREATE_ERROR -> CONTAINER_FILES_CREATE_ERROR
 CONTAINER_CHECKSUM_FILE_CALCULATE_ERROR -> CONTAINER_CHECKSUM_ERROR
 # {{KeyValueContainer}} Line 131, 133 : Instead of 
"String.valueOf(containerId)", we can reuse containerName
 # {{KeyValueContainer#createRequiredFiles()}} -> there is a possibility that 
the rename of .containerFile succeeds but .checksum file fails. In this 
scenario and all other scenarios where the createContainer fails, we should 
delete the containerBase dir.
 # Can we rename {{createRequiredFiles}} and {{updateRequiredFiles}} to 
createContainerFile and updateContainerFile? Required doesn’t tell us instantly 
what the files are. I think it is ok not to mention that the checksum file is 
also created as part of createContainerFile.
 # Line 225 : "Unable to delete container checksum backup file" -> Since we are 
deleting the temporary file, we can have "Unable to delete container temporary 
checksum file".
 # {{KeyValueContainer#updateRequiredFiles()}}, when throwing 
INVALID_CONTAINER_STATE or when the restore fails, we should set the state of 
the container to INVALID.
 # When restoring from backup files, can we add an info log message saying 
"update failed, so restoring the container files."
 # Javadoc of computeChecksum is misleading. "Create checksum file of the 
.container file." -> "Compute checksum of the .container file".
 # Can we have a file handle to the {{.db}} file in {{KeyValueContainerData}}, as 
this would be used by every key operation.
 # {{KeyValueContainerUtil#verifyIsNewContainer()}} -> existence of 
containerBasePath should be a sufficient condition to fail createContainer.

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)




[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513051#comment-16513051
 ] 

genericqa commented on HDDS-155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
49s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
33s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
23s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927861/HDDS-155-HDDS-48.05.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 76e0a3f65816 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-48 / 9a5552b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HDDS-155) Implement KeyValueContainer and adopt new disk layout for the containers

2018-06-14 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513041#comment-16513041
 ] 

Hanisha Koneru commented on HDDS-155:
-

Thanks for the update [~bharatviswa].

Some comments (Many of them are just NITs.):
 # Can we rename the following in {{DatanodeContainerProtocol.proto}}
 CONTAINER_REQUIRED_FILES_CREATE_ERROR -> CONTAINER_FILES_CREATE_ERROR
 CONTAINER_CHECKSUM_FILE_CALCULATE_ERROR -> CONTAINER_CHECKSUM_ERROR
 # {{KeyValueContainer}} Line 131, 133 : Instead of 
"String.valueOf(containerId)", we can reuse containerName
 # {{KeyValueContainer#createRequiredFiles()}} -> there is a possibility that 
the rename of .containerFile succeeds but .checksum file fails. In this 
scenario and all other scenarios where the createContainer fails, we should 
delete the containerBase dir.
 # Can we rename {{createRequiredFiles}} and {{updateRequiredFiles}} to 
createContainerFile and updateContainerFile? Required doesn’t tell us instantly 
what the files are. I think it is ok not to mention that the checksum file is 
also created as part of createContainerFile.
 # Line 225 : "Unable to delete container checksum backup file" -> Since we are 
deleting the temporary file, we can have "Unable to delete container temporary 
checksum file".
 # {{KeyValueContainer#updateRequiredFiles()}}, when throwing 
INVALID_CONTAINER_STATE or when the restore fails, we should set the state of 
the container to INVALID.
 # When restoring from backup files, can we add an info log message saying 
"update failed, so restoring the container files."
 # Javadoc of computeChecksum is misleading. "Create checksum file of the 
.container file." -> "Compute checksum of the .container file".
 # Can we have a file handle to the {{.db}} file in {{KeyValueContainerData}}, as 
this would be used by every key operation.
 # {{KeyValueContainerUtil#verifyIsNewContainer()}} -> existence of 
containerBasePath should be a sufficient condition to fail createContainer.
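The backup/restore flow requested in the review (delete the backup when the 
update succeeds, restore the original when it fails) might be sketched like 
this; all names here are hypothetical and not taken from the patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicFileUpdate {
  // Back up the current file, write the new contents, delete the backup on
  // success, and restore from the backup if the write fails.
  static void updateWithBackup(Path file, byte[] newData) throws IOException {
    Path backup = file.resolveSibling(file.getFileName() + ".backup");
    Files.copy(file, backup, StandardCopyOption.REPLACE_EXISTING);
    try {
      Files.write(file, newData);
      Files.delete(backup); // update succeeded: the backup must go away
    } catch (IOException e) {
      // update failed: restore the original contents before rethrowing
      Files.move(backup, file, StandardCopyOption.REPLACE_EXISTING);
      throw e;
    }
  }

  // Round-trip demo used below; returns the final file contents.
  static String demo() {
    try {
      Path f = Files.createTempFile("demo", ".container");
      Files.write(f, "v1".getBytes());
      updateWithBackup(f, "v2".getBytes());
      String result = new String(Files.readAllBytes(f));
      Files.delete(f);
      return result;
    } catch (IOException e) {
      return "";
    }
  }

  public static void main(String[] args) {
    System.out.println(demo()); // prints v2
  }
}
```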

> Implement KeyValueContainer and adopt new disk layout for the containers
> 
>
> Key: HDDS-155
> URL: https://issues.apache.org/jira/browse/HDDS-155
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-155-HDDS-48.00.patch, HDDS-155-HDDS-48.01.patch, 
> HDDS-155-HDDS-48.02.patch, HDDS-155-HDDS-48.03.patch, 
> HDDS-155-HDDS-48.04.patch, HDDS-155-HDDS-48.05.patch
>
>
> This Jira is to add following:
>  # Implement Container Interface
>  # Use new directory layout proposed in the design document.
>  a. Data location (chunks)
>  b. Meta location (DB and .container files)






[jira] [Created] (HDDS-168) Add ScmGroupID to Datanode Version File

2018-06-14 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-168:
---

 Summary: Add ScmGroupID to Datanode Version File
 Key: HDDS-168
 URL: https://issues.apache.org/jira/browse/HDDS-168
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru


Add the field {{ScmGroupID}} to Datanode Version file. This field identifies 
the set of SCMs that this datanode talks to, or takes commands from.

This value is not the same as the Cluster ID, since a cluster can technically 
have more than one SCM group.

Refer to [~anu]'s 
[comment|https://issues.apache.org/jira/browse/HDDS-156?focusedCommentId=16511903=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511903]
 in HDDS-156.






[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-14 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Fix Version/s: 0.2.1

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests






[jira] [Updated] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-14 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-167:
---
Component/s: Ozone Manager

> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.2.1
>
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests






[jira] [Created] (HDFS-13681) Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test failure on Windows

2018-06-14 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13681:
-

 Summary: Fix TestStartup.testNNFailToStartOnReadOnlyNNDir test 
failure on Windows
 Key: HDFS-13681
 URL: https://issues.apache.org/jira/browse/HDFS-13681
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Liang
Assignee: Xiao Liang


org.apache.hadoop.hdfs.server.namenode.TestStartup.testNNFailToStartOnReadOnlyNNDir
 fails on Windows with the following error message:

NN dir should be created after NN startup. 
expected:<[F:\short\hadoop-trunk-win\s\hadoop-hdfs-project\hadoop-hdfs\target\test\data\3\dfs\testNNFailToStartOnReadOnlyNNDir\]name>
 but 
was:<[/F:/short/hadoop-trunk-win/s/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/testNNFailToStartOnReadOnlyNNDir/]name>

This is because the path is not processed properly on Windows.
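A minimal sketch of the kind of normalization that would make the two path 
spellings above compare equal; the helper is hypothetical, not the actual fix:

```java
public class PathNormalizer {
  // Normalize both spellings to forward slashes and strip the URI-style
  // leading slash that precedes a Windows drive letter.
  static String normalize(String path) {
    String p = path.replace('\\', '/');
    if (p.matches("^/[A-Za-z]:/.*")) {
      p = p.substring(1);
    }
    return p;
  }

  public static void main(String[] args) {
    String expected = "F:\\short\\dfs\\name";
    String actual = "/F:/short/dfs/name";
    System.out.println(normalize(expected).equals(normalize(actual))); // prints true
  }
}
```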






[jira] [Created] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-167:
--

 Summary: Rename KeySpaceManager to OzoneManager
 Key: HDDS-167
 URL: https://issues.apache.org/jira/browse/HDDS-167
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some more 
changes needed to complete the rename everywhere, e.g.
- command-line
- documentation
- unit tests
- Acceptance tests







[jira] [Work started] (HDDS-167) Rename KeySpaceManager to OzoneManager

2018-06-14 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-167 started by Arpit Agarwal.
--
> Rename KeySpaceManager to OzoneManager
> --
>
> Key: HDDS-167
> URL: https://issues.apache.org/jira/browse/HDDS-167
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> The Ozone KeySpaceManager daemon was renamed to OzoneManager. There are some 
> more changes needed to complete the rename everywhere, e.g.
> - command-line
> - documentation
> - unit tests
> - Acceptance tests






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513005#comment-16513005
 ] 

Hudson commented on HDFS-13675:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14431 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14431/])
HDFS-13675. Speed up TestDFSAdminWithHA. Contributed by Lukas Majercak. 
(inigoiri: rev 5d7449d2b8bcd0963d172fc30df784279671176f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java


> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch, 
> HDFS-13675_branch-2.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 
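The slowdown described in the issue is dominated by the exponential backoff: the 
cumulative wait for n retries at a base delay grows as base * (2^n - 1). A tiny 
illustration with made-up numbers (not the actual client retry settings):

```java
public class BackoffDemo {
  // Cumulative wait for n retries with exponential backoff from a base delay:
  // base + 2*base + 4*base + ... = base * (2^n - 1).
  static long totalBackoffMillis(long baseMillis, int retries) {
    long total = 0;
    for (int i = 0; i < retries; i++) {
      total += baseMillis << i; // baseMillis * 2^i
    }
    return total;
  }

  public static void main(String[] args) {
    // Each extra retry doubles the tail wait, so trimming the retry count
    // in the test configuration pays off very quickly.
    System.out.println(totalBackoffMillis(1000, 4)); // prints 15000
  }
}
```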






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512997#comment-16512997
 ] 

Íñigo Goiri commented on HDFS-13675:


Thanks [~lukmajercak].
Committed to trunk, branch-3.1, branch-2, and branch-2.9.

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch, 
> HDFS-13675_branch-2.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13675:
---
Fix Version/s: 2.9.2
   2.10.0

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch, 
> HDFS-13675_branch-2.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512990#comment-16512990
 ] 

Lukas Majercak commented on HDFS-13675:
---

Added HDFS-13675_branch-2.000.patch

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch, 
> HDFS-13675_branch-2.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13675:
--
Attachment: HDFS-13675_branch-2.000.patch

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch, 
> HDFS-13675_branch-2.000.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Attachment: HDDS-160-HDDS-48.01.patch

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira adds new interfaces ChunkManager and KeyManager to perform key- 
> and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of the usage of ContainerManager.
>  ## Passing the container to method calls.
>  ## Using layOutVersion when reading/deleting chunk files.
> Add a new class KeyValueManager to implement ContainerManager.
>  






[jira] [Updated] (HDDS-160) Refactor KeyManager, ChunkManager

2018-06-14 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-160:

Attachment: (was: HDDS-160-HDDS-48.00.patch)

> Refactor KeyManager, ChunkManager
> -
>
> Key: HDDS-160
> URL: https://issues.apache.org/jira/browse/HDDS-160
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-160-HDDS-48.01.patch
>
>
> This Jira adds new interfaces ChunkManager and KeyManager to perform key- 
> and chunk-related operations.
>  # Changes to the existing KeyManager and ChunkManager are:
>  ## Removal of the usage of ContainerManager.
>  ## Passing the container to method calls.
>  ## Using layOutVersion when reading/deleting chunk files.
> Add a new class KeyValueManager to implement ContainerManager.
>  






[jira] [Commented] (HDFS-11257) Evacuate DN when the remaining is negative

2018-06-14 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512975#comment-16512975
 ] 

Daryn Sharp commented on HDFS-11257:


bq. Not dedicated HDFS clusters. We have some machines where DNs share the disk 
space with other services that have priority.

For the NN to have a policy of blindly accommodating "something else" impinging 
on its assigned disk space is a generally dangerous feature.  The NN cannot 
know/decide if the disk is filling from an abusive job or from another 
"legitimate" tenant of the host.

> Evacuate DN when the remaining is negative
> --
>
> Key: HDFS-11257
> URL: https://issues.apache.org/jira/browse/HDFS-11257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.3
>Reporter: Íñigo Goiri
>Priority: Major
>
> Datanodes have a maximum amount of disk they can use. This is set using 
> {{dfs.datanode.du.reserved}}. For example, if we have a 1TB disk and we set 
> the reserved to 100GB, the DN can only use ~900GB. However, if we fill the DN 
> and later other processes (e.g., logs or co-located services) start to use 
> the disk space, the remaining space will go negative and the used 
> storage will exceed 100%.
> The Rebalancer or decommissioning would cover this situation. However, both 
> approaches require administrator intervention while this is a situation that 
> violates the settings. Note that decommissioning would be too extreme, as it 
> would evacuate all the data.
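The arithmetic in the description above can be sketched as follows. This is an illustrative script, not DataNode code; the helper name and the non-DFS usage figure are made up for the example:

```python
GB = 1024 ** 3

def dn_remaining(capacity, reserved, dfs_used, non_dfs_used):
    """Space still available to HDFS on a volume after reservation and all usage."""
    return capacity - reserved - dfs_used - non_dfs_used

capacity     = 1024 * GB  # 1 TB disk
reserved     = 100 * GB   # dfs.datanode.du.reserved
dfs_used     = 900 * GB   # the DN has filled its ~900 GB allowance
non_dfs_used = 50 * GB    # logs / co-located services grow afterwards

remaining = dn_remaining(capacity, reserved, dfs_used, non_dfs_used)
used_pct = 100.0 * dfs_used / (capacity - reserved - non_dfs_used)
print(remaining // GB, round(used_pct, 1))  # -26 103.0: negative remaining, >100% used
```

Once `remaining` goes negative like this, only moving blocks off the node (rebalancing or decommissioning) restores the configured reservation, which is what motivates an automatic evacuation mechanism.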






[jira] [Commented] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-14 Thread Anatoli Shein (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512965#comment-16512965
 ] 

Anatoli Shein commented on HDFS-11520:
--

In the new patch I added tests for canceling all RPCs.

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-14 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: (was: HDFS-11520.002.patch)

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Updated] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13675:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-14 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: HDFS-11520.003.patch

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.003.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512961#comment-16512961
 ] 

Íñigo Goiri commented on HDFS-13675:


[~lukmajercak], it only applies to branch-3.1 and trunk.
Can you provide one for branch-2?

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Updated] (HDFS-11520) libhdfs++: Support cancellation of individual RPC calls in C++ API

2018-06-14 Thread Anatoli Shein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11520:
-
Attachment: HDFS-11520.002.patch

> libhdfs++: Support cancellation of individual RPC calls in C++ API
> --
>
> Key: HDFS-11520
> URL: https://issues.apache.org/jira/browse/HDFS-11520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11520.002.patch, HDFS-11520.002.patch, 
> HDFS-11520.HDFS-8707.000.patch, HDFS-11520.trunk.001.patch
>
>
> RPC calls done by FileSystem methods like Mkdirs, GetFileInfo etc should be 
> individually cancelable without impacting other pending RPC calls.






[jira] [Commented] (HDFS-13675) Speed up TestDFSAdminWithHA

2018-06-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512957#comment-16512957
 ] 

Íñigo Goiri commented on HDFS-13675:


The failed unit tests are the usual suspects.
+1 on  [^HDFS-13675.001.patch].
Committing.

> Speed up TestDFSAdminWithHA
> ---
>
> Key: HDFS-13675
> URL: https://issues.apache.org/jira/browse/HDFS-13675
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13675.000.patch, HDFS-13675.001.patch
>
>
> Currently, TestDFSAdminWithHA takes about 10 minutes to finish. The main 
> culprits are two tests:
> testListOpenFilesNN1DownNN2Down
> testSetBalancerBandwidthNN1DownNN2Down
>  
> that each take ~3 minutes to finish. This is because they both expect to fail 
> to connect to 2 namenodes, but the client retry policy has way too many 
> retries and exponential backoffs. 






[jira] [Commented] (HDDS-146) Refactor the structure of the acceptance tests

2018-06-14 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512956#comment-16512956
 ] 

genericqa commented on HDDS-146:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 22m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
14s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 4 new + 2 unchanged - 2 fixed = 6 
total (was 4) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m  
7s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 20s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-146 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927854/HDDS-146.005.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  shadedclient  xml  |
| uname | Linux 918397e1f3b9 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d4926f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| shellcheck | v0.4.6 |
| shellcheck | 
