[jira] [Updated] (HDFS-14133) Add limit of replication streams on a given node at one time for erasure code blocks reconstruction

2018-12-06 Thread liaoyuxiangqin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liaoyuxiangqin updated HDFS-14133:
--
Description: 
    While testing block reconstruction triggered by a data disk going offline, I 
found that the reconstruction speed of replicated blocks is limited by 
{color:#FF0000}dfs.namenode.replication.work.multiplier.per.iteration{color} in 
the NameNode's work computation, and that for a given node the maximum number of 
outgoing replication streams at one time is limited by 
{color:#FF0000}dfs.namenode.replication.max-streams{color} and 
{color:#FF0000}dfs.namenode.replication.max-streams-hard-limit{color}. For 
Erasure Coded blocks, however, only the total number of blocks to reconstruct 
per iteration is limited by 
{color:#FF0000}dfs.namenode.replication.work.multiplier.per.iteration{color}; 
there is no per-node limit on the maximum number of outgoing replication 
streams, which may cause node load imbalance. I think EC reconstruction also 
needs such a per-node limit.
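
For illustration, a minimal sketch of the kind of per-node throttle that 
replicated blocks already get when source nodes are chosen, and that this issue 
proposes to also apply to EC reconstruction. The names below 
({{getNumberOfBlocksToBeReplicated}}, {{maxReplicationStreams}}, 
{{replicationStreamsHardLimit}}) are assumptions modeled on the NameNode's 
BlockManager and may not match the exact code path:

{code}
// Hedged sketch, not the actual BlockManager code: skip a node whose in-flight
// reconstruction work already reaches the configured per-node stream limits.
private boolean underStreamLimit(DatanodeDescriptor node, int priority) {
  int streams = node.getNumberOfBlocksToBeReplicated();
  if (priority == LowRedundancyBlocks.QUEUE_HIGHEST_PRIORITY) {
    // Highest-priority work may exceed the soft limit but never the hard limit.
    return streams < replicationStreamsHardLimit;
  }
  return streams < maxReplicationStreams;
}
{code}

The proposal is to apply the same check when assigning erasure coded 
reconstruction work, so that no single node accumulates an unbounded number of 
outgoing streams.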

> Add limit of replication streams on a given node at one time for erasure code 
> blocks reconstruction 
> 
>
> Key: HDFS-14133
> URL: https://issues.apache.org/jira/browse/HDFS-14133
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.3.0
>Reporter: liaoyuxiangqin
>Priority: Major
>
>     While testing block reconstruction triggered by a data disk going offline, I 
> found that the reconstruction speed of replicated blocks is limited by 
> {color:#FF0000}dfs.namenode.replication.work.multiplier.per.iteration{color} in 
> the NameNode's work computation, and that for a given node the maximum number 
> of outgoing replication streams at one time is limited by 
> {color:#FF0000}dfs.namenode.replication.max-streams{color} and 
> {color:#FF0000}dfs.namenode.replication.max-streams-hard-limit{color}. For 
> Erasure Coded blocks, however, only the total number of blocks to reconstruct 
> per iteration is limited by 
> {color:#FF0000}dfs.namenode.replication.work.multiplier.per.iteration{color}; 
> there is no per-node limit on the maximum number of outgoing replication 
> streams, which may cause node load imbalance. I think EC reconstruction also 
> needs such a per-node limit.






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712445#comment-16712445
 ] 

Hadoop QA commented on HDFS-13443:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
5s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13443 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950940/HDFS-13443-016.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  

[jira] [Created] (HDFS-14133) Add limit of replication streams on a given node at one time for erasure code blocks reconstruction

2018-12-06 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-14133:
-

 Summary: Add limit of replication streams on a given node at one 
time for erasure code blocks reconstruction 
 Key: HDFS-14133
 URL: https://issues.apache.org/jira/browse/HDFS-14133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.3.0
Reporter: liaoyuxiangqin









[jira] [Commented] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712428#comment-16712428
 ] 

Hadoop QA commented on HDDS-908:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 22s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
20s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-908 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950945/HDDS-908.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 585f9e7a1996 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 6c852f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1893/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1893/testReport/ |
| Max. process+thread count | 1329 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1893/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.




[jira] [Commented] (HDDS-815) Rename Ozone/HDDS config keys prefixed with 'dfs'

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712421#comment-16712421
 ] 

Hadoop QA commented on HDDS-815:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} root: The patch generated 0 new + 2 unchanged - 2 
fixed = 2 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m  9s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
47s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950943/HDDS-815.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 6c146e625b6f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 6c852f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1891/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1891/testReport/ |
| Max. process+thread count | 1158 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/tools U: 
. |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1891/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Rename Ozone/HDDS config keys prefixed with 'dfs'
> -
>
> Key: HDDS-815
> URL: https://issues.apache.org/jira/browse/HDDS-815
> Project: Hadoop Distributed Data Store
>  

[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712409#comment-16712409
 ] 

Hadoop QA commented on HDDS-99:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 52s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
44s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-99 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950944/HDDS-99.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 9873ae688b06 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 6c852f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1892/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1892/testReport/ |
| Max. process+thread count | 1155 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common 
hadoop-ozone/dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1892/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  

[jira] [Commented] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712392#comment-16712392
 ] 

Dinesh Chitlangia commented on HDDS-908:


[~ajayydv] Thanks for filing the jira and for the patch. I was able to run the 
test locally without the patch. Which test is failing for you? I can retry in my 
setup.

> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.






[jira] [Updated] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-06 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-908:

Attachment: HDDS-908.00.patch

> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.






[jira] [Updated] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-06 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-908:

Status: Patch Available  (was: Open)

> NPE in TestOzoneRpcClient
> -
>
> Key: HDDS-908
> URL: https://issues.apache.org/jira/browse/HDDS-908
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-908.00.patch
>
>
> Fix NPE in TestOzoneRpcClient.






[jira] [Created] (HDDS-908) NPE in TestOzoneRpcClient

2018-12-06 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-908:
---

 Summary: NPE in TestOzoneRpcClient
 Key: HDDS-908
 URL: https://issues.apache.org/jira/browse/HDDS-908
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Fix NPE in TestOzoneRpcClient.






[jira] [Updated] (HDDS-815) Rename Ozone/HDDS config keys prefixed with 'dfs'

2018-12-06 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-815:
---
Attachment: HDDS-815.001.patch
Status: Patch Available  (was: Open)

[~anu], [~arpitagarwal] - Apart from those listed in the description, I found a 
few more keys that should undergo the same modification. I have made an initial 
attempt at this; attached patch 001 for review. 
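
For illustration, a hedged before/after sketch for one of the keys; the Java 
constant names below are assumptions made for this example, and only the key 
strings come from the issue description:

{code}
// Hypothetical constants illustrating the rename; not necessarily the names
// used in the patch.
public static final String OLD_KEY = "dfs.container.ipc";        // HDFS-style "dfs." prefix
public static final String NEW_KEY = "hdds.container.ipc.port";  // "hdds." prefix plus ".port" suffix
{code}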

> Rename Ozone/HDDS config keys prefixed with 'dfs'
> -
>
> Key: HDDS-815
> URL: https://issues.apache.org/jira/browse/HDDS-815
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-815.001.patch
>
>
> The following Ozone config keys are prefixed with dfs, which is the prefix 
> used by HDFS. Instead, we should prefix them with either HDDS or Ozone.
> {code}
> dfs.container.ipc
> dfs.container.ipc.random.port
> dfs.container.ratis.datanode.storage.dir
> dfs.container.ratis.enabled
> dfs.container.ratis.ipc
> dfs.container.ratis.ipc.random.port
> dfs.container.ratis.num.container.op.executors
> dfs.container.ratis.num.write.chunk.threads
> dfs.container.ratis.replication.level
> dfs.container.ratis.rpc.type
> dfs.container.ratis.segment.preallocated.size
> dfs.container.ratis.segment.size
> dfs.container.ratis.statemachinedata.sync.timeout
> dfs.ratis.client.request.max.retries
> dfs.ratis.client.request.retry.interval
> dfs.ratis.client.request.timeout.duration
> dfs.ratis.leader.election.minimum.timeout.duration
> dfs.ratis.server.failure.duration
> dfs.ratis.server.request.timeout.duration
> dfs.ratis.server.retry-cache.timeout.duration
> dfs.ratis.snapshot.threshold
> {code}
> Additionally, _dfs.container.ipc_ should be changed to 
> _dfs.container.ipc.port_.






[jira] [Comment Edited] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712379#comment-16712379
 ] 

Dinesh Chitlangia edited comment on HDDS-99 at 12/7/18 6:07 AM:


[~linyiqun] Unfortunately, there are many parameters, and different methods take 
different sets of parameters :(

Attached patch 002, which removes the unused value from the Enum.


was (Author: dineshchitlangia):
[~linyiqun] Unfortunately, there are many parameters, and different methods take 
different sets of parameters :(

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Updated] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-99:
--
Attachment: HDDS-99.002.patch

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712379#comment-16712379
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

[~linyiqun] Unfortunately, there are many parameters, and different methods take 
different sets of parameters :(

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Comment Edited] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712375#comment-16712375
 ] 

Yiqun Lin edited comment on HDDS-99 at 12/7/18 5:59 AM:


{quote}
When I was writing this piece, I initially felt the same. However, I noticed 
that even with this new approach we are not really reducing the line count: no 
matter what approach we take, we still have to specify each key/value pair. The 
only realistic duplicate line is where we initialize an audit map. Thus, I feel 
we can write a function that just returns an empty map; otherwise I think we can 
skip this part, because the multiple lines of the form auditMap.put("key",value) 
appear to be more readable to me. Let me know what you think.
{quote}
[~dineshchitlangia], actually I intended here to reduce the lines of 
{{auditMap.put("key",value)}} when there are many parameters. I'm fine with not 
changing this if you think it looks more readable. :)
 


was (Author: linyiqun):
{quote}
When I was writing this piece, I initially felt the same. However, I noticed 
that even with this new approach we are not really reducing the line count: no 
matter what approach we take, we still have to specify each key/value pair. The 
only realistic duplicate line is where we initialize an audit map. Thus, I feel 
we can write a function that just returns an empty map; otherwise I think we can 
skip this part, because the multiple lines of the form auditMap.put("key",value) 
appear to be more readable to me. Let me know what you think.
{quote}
[~dineshchitlangia], actually I intended here to reduce the lines of 
{{auditMap.put("key",value)}} when there are many parameters. I'm fine if you 
think this looks more readable. :)
 

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712375#comment-16712375
 ] 

Yiqun Lin commented on HDDS-99:
---

{quote}
When I was writing this piece, I initially felt the same. However, I noticed 
that even with this new approach we are not really reducing the line count: no 
matter what approach we take, we still have to specify each key/value pair. The 
only realistic duplicate line is where we initialize an audit map. Thus, I feel 
we can write a function that just returns an empty map; otherwise I think we can 
skip this part, because the multiple lines of the form auditMap.put("key",value) 
appear to be more readable to me. Let me know what you think.
{quote}
[~dineshchitlangia], actually I intended here to reduce the lines of 
{{auditMap.put("key",value)}} when there are many parameters. I'm fine if you 
think this looks more readable. :)
 

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Commented] (HDFS-14109) Improve hdfs auditlog format and support federation friendly

2018-12-06 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712348#comment-16712348
 ] 

He Xiaoqiao commented on HDFS-14109:


Thanks [~xkrogen], [~kihwal] for discussing this issue.
{quote}I think as with most recent additions to the audit log, it should be 
protected by a config which defaults to off. In particular, in an environment 
using only a single namespace, we definitely don't want this information.{quote}
+1, enable this only for federation with multiple namespaces, and switch it off 
by default via a config.
{quote}People deal with logs from multiple systems today without having to 
insert the source identity in every single log line.{quote}
Actually, while there are multiple systems that can deal with mass log data, my 
opinion is:
1) This is the lowest-cost way to deal with the logs; e.g., 10B audit-log 
records may cost a large amount of computing resources if we relay them through 
another system.
2) Another point: I consider this to be within the scope of HDFS rather than 
something to push to other systems.
Maybe I am missing some information; please give your feedback if anything here 
is wrong.
Thanks [~xkrogen], [~kihwal] again.

> Improve hdfs auditlog format and support federation friendly
> 
>
> Key: HDFS-14109
> URL: https://issues.apache.org/jira/browse/HDFS-14109
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14109.patch
>
>
> The current audit log format does not meet the requirements of a federated 
> architecture well. In some cases we need to aggregate the audit logs of all 
> namespaces, and for requests on common paths (e.g. /tmp, /user/, etc.; some 
> paths may not appear in the mount table but are nevertheless real), we have no 
> way to tell which namespace a request was sent to. So I propose adding an 
> {{nsid}} column to support federation more friendly.  
> {quote}2018-11-27 13:20:30,028 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs/hostn...@realm.com (auth:KERBEROS)  ip=/10.1.1.2 cmd=getfileinfo 
> src=/path   dst=null        perm=null       proto=rpc       clientName=null
> {quote}
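
For illustration, with the proposed column the same line might look as follows, 
where {{nsid=ns1}} is a hypothetical namespace identifier (the exact name and 
position of the new column are not settled by this issue):
{quote}2018-11-27 13:20:30,028 INFO FSNamesystem.audit: allowed=true ugi=hdfs/hostn...@realm.com (auth:KERBEROS) ip=/10.1.1.2 cmd=getfileinfo src=/path dst=null perm=null proto=rpc clientName=null nsid=ns1
{quote}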






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-06 Thread Mohammad Arshad (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712340#comment-16712340
 ] 

Mohammad Arshad commented on HDFS-13443:


Handled the above two comments.
Thanks [~elgoiri] for the reviews.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, on the admin server side, will perform the following 
> sequence of actions (see the sketch after this list):
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.
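
A minimal sketch of that sequence, assuming hypothetical helper and method names 
({{getStateStore}}, {{RouterClient}}, {{refreshMountTableEntries}}, 
{{loadCache}} are illustrative, not necessarily the API introduced by the patch):

{code}
// Hedged sketch of the proposed flow on the admin router (R1); the names here
// are assumptions, not the exact API from the patch.
public AddMountTableEntryResponse addEntry(MountTable newEntry) throws IOException {
  // 1. Persist the new entry in the shared state store.
  getStateStore().addMountTableEntry(newEntry);
  // 2. Ask every other router (R2, R3, ...) to refresh its mount table cache.
  for (RouterClient remote : getOtherRouters()) {
    remote.refreshMountTableEntries();
  }
  // 3. Refresh the local cache directly.
  getLocalCache().loadCache(true);
  // 4. Only then send the add-entry response back to the user.
  AddMountTableEntryResponse response = AddMountTableEntryResponse.newInstance();
  response.setStatus(true);
  return response;
}
{code}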






[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712344#comment-16712344
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

[~linyiqun] Thank you for reviewing the patch.
{quote} * Audit action {{IN_CHILL_MODE}} is actually not used; it looks like it 
was a duplicate of {{IS_CHILL_MODE}}.{quote}
Sounds good. I have removed {{IS_CHILL_MODE}} and used {{IN_CHILL_MODE}} where 
needed.
{quote}Can we define a new function to construct the audit map with the given 
parameters? As far as I can see, we duplicate this logic everywhere.
{quote}
When I was writing this piece, I initially felt the same. However, I noticed 
that even with this new approach we are not really reducing the line count: no 
matter what approach we take, we still have to specify each key/value pair. The 
only realistic duplicate line is where we initialize an audit map. Thus, I feel 
we can write a function that just returns an empty map; otherwise I think we can 
skip this part, because the multiple lines of the form 
{{auditMap.put("key",value)}} appear to be more readable to me. Let me know what 
you think (a short sketch of the two styles follows below).
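
For illustration, a minimal sketch of the two styles under discussion; the 
helper name follows the suggestion in this thread ({{constructAuditMap}}), while 
the surrounding variables are assumptions:

{code}
// Style 1 (as in the current patch): one explicit put per parameter.
Map<String, String> auditMap = Maps.newHashMap();
auditMap.put("startContainerID", String.valueOf(startContainerID));
auditMap.put("count", String.valueOf(count));

// Style 2 (suggested helper): varargs key/value pairs collapsed into one call.
Map<String, String> auditMap2 = constructAuditMap(
    "startContainerID", String.valueOf(startContainerID),
    "count", String.valueOf(count));
{code}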

 

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.






[jira] [Commented] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712339#comment-16712339
 ] 

Hadoop QA commented on HDFS-14001:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  6s{color} | {color:orange} root: The patch generated 4 new + 425 unchanged 
- 0 fixed = 429 total (was 425) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950929/HDFS-14001.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 721bd32b4508 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c852f2 |

[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-06 Thread Mohammad Arshad (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HDFS-13443:
---
Attachment: HDFS-13443-016.patch

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, on the admin server side, will perform the following 
> sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.






[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2018-12-06 Thread Greg Senia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712322#comment-16712322
 ] 

Greg Senia commented on HDFS-9276:
--

Any chance this can be backported to Hadoop 2.7.x? Spark still ships Hadoop 
2.7.x as its primary bundled Hadoop packaging, which means Spark running on 
Hadoop is broken: classloader issues occur and pick up this defect.

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha1, 2.8.2
>
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, HDFS-9276.11.patch, 
> HDFS-9276.12.patch, HDFS-9276.13.patch, HDFS-9276.14.patch, 
> HDFS-9276.15.patch, HDFS-9276.16.patch, HDFS-9276.17.patch, 
> HDFS-9276.18.patch, HDFS-9276.19.patch, HDFS-9276.20.patch, 
> HDFSReadLoop.scala, debug1.PNG, debug2.PNG
>
>
> The Scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications. 
> The HDFS client generates private tokens for each NameNode. When we update the 
> HDFS Delegation Token, these private tokens are not updated, which causes the 
> token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
>     val keytab = "/path/to/keytab/xxx.keytab"
>     val principal = "x...@abc.com"
>     val creds1 = new org.apache.hadoop.security.Credentials()
>     val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>     ugi1.doAs(new PrivilegedExceptionAction[Void] {
>       // Get a copy of the credentials
>       override def run(): Void = {
>         val fs = FileSystem.get(new Configuration())
>         fs.addDelegationTokens("test", creds1)
>         null
>       }
>     })
>     val ugi = UserGroupInformation.createRemoteUser("test")
>     ugi.addCredentials(creds1)
>     ugi.doAs(new PrivilegedExceptionAction[Void] {
>       // Refresh the delegation token once a minute, forever
>       override def run(): Void = {
>         var i = 0
>         while (true) {
>           val creds1 = new org.apache.hadoop.security.Credentials()
>           val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>           ugi1.doAs(new PrivilegedExceptionAction[Void] {
>             // Get a fresh copy of the credentials
>             override def run(): Void = {
>               val fs = FileSystem.get(new Configuration())
>               fs.addDelegationTokens("test", creds1)
>               null
>             }
>           })
>           UserGroupInformation.getCurrentUser.addCredentials(creds1)
>           val fs = FileSystem.get(new Configuration())
>           i += 1
>           println()
>           println(i)
>           println(fs.listFiles(new Path("/user"), false))
>           Thread.sleep(60 * 1000)
>         }
>         null
>       }
>     })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stacktrace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Commented] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-06 Thread xiangheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712297#comment-16712297
 ] 

xiangheng commented on HDFS-14130:
--

Hi [+Konstantin 
Shvachko+|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=shv], if I 
understand correctly, we should start up an ObserverNode and make it work in an 
auto-failover environment, and the ZKFC failover controller will need to 
recognize the observer state. Is that it?

Or can ObserverNodes automatically fail over to SBNs?

I'm working on failover-related things; maybe you can assign this issue to me. 
Thank you very much.

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.






[jira] [Issue Comment Deleted] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-06 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng updated HDFS-14130:
-
Comment: was deleted

(was: Hi [+Konstantin 
Shvachko+|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=shv], if I 
understand correctly, we should start up an ObserverNode and make it work in an 
auto-failover environment, and the ZKFC failover controller will need to 
recognize the observer state. Is that it?

Or can ObserverNodes automatically fail over to SBNs?

I'm working on failover-related things; maybe you can assign this issue to me. 
Thank you very much.)

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.






[jira] [Commented] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-06 Thread xiangheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712298#comment-16712298
 ] 

xiangheng commented on HDFS-14130:
--

Hi [+Konstantin 
Shvachko+|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=shv], if I 
understand correctly, we should start up an ObserverNode and make it work in an 
auto-failover environment, and the ZKFC failover controller will need to 
recognize the observer state. Is that it?

Or can ObserverNodes automatically fail over to SBNs?

I'm working on failover-related things; maybe you can assign this issue to me. 
Thank you very much.

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712294#comment-16712294
 ] 

Yiqun Lin edited comment on HDDS-99 at 12/7/18 3:29 AM:


Thanks [~dineshchitlangia] for working on this! It almost looks good to me; some 
minor comments from me:
 * Audit action {{IN_CHILL_MODE}} is not actually used; it looks like a 
duplicate of {{IS_CHILL_MODE}}.
 * Can we define a new function to construct the audit map from the given 
parameters? As far as I can see, we duplicate this logic everywhere. The new 
function could look like:
{noformat}
  private Map<String, String> constructAuditMap(String... params) {
    Map<String, String> auditMap = Maps.newHashMap();
    for (int i = 0; i < params.length; i += 2) {
      auditMap.put(params[i], params[i + 1]);
    }
    return auditMap;
  }
{noformat}
For example, using this, listContainer's audit map is simplified to 
{{auditMap = constructAuditMap("startContainerID", 
String.valueOf(startContainerID), "count", String.valueOf(count))}}


was (Author: linyiqun):
Thanks [~dineshchitlangia] for working on this! It almost looks good to me; some 
minor comments from me:
 * Audit action {{IN_CHILL_MODE}} is not actually used; it looks like a 
duplicate of {{IS_CHILL_MODE}}.
 * Can we define a new function to construct the audit map from the given 
parameters? As far as I can see, we duplicate this logic everywhere. The new 
function could look like:
{noformat}
  private Map<String, String> constructAuditMap(String... params) {
    Map<String, String> auditMap = Maps.newHashMap();
    for (int i = 0; i < params.length; i += 2) {
      auditMap.put(params[i], params[i + 1]);
    }
    return auditMap;
  }
{noformat}
Using this, listContainer's audit map is simplified to {{auditMap = 
constructAuditMap("startContainerID", String.valueOf(startContainerID), 
"count", String.valueOf(count))}}

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712294#comment-16712294
 ] 

Yiqun Lin commented on HDDS-99:
---

Thanks [~dineshchitlangia] for working on this! It almost looks good to me; some 
minor comments from me:
 * Audit action {{IN_CHILL_MODE}} is not actually used; it looks like a 
duplicate of {{IS_CHILL_MODE}}.
 * Can we define a new function to construct the audit map from the given 
parameters? As far as I can see, we duplicate this logic everywhere. The new 
function could look like:
{noformat}
  private Map<String, String> constructAuditMap(String... params) {
    Map<String, String> auditMap = Maps.newHashMap();
    for (int i = 0; i < params.length; i += 2) {
      auditMap.put(params[i], params[i + 1]);
    }
    return auditMap;
  }
{noformat}
Using this, listContainer's audit map is simplified to {{auditMap = 
constructAuditMap("startContainerID", String.valueOf(startContainerID), 
"count", String.valueOf(count))}}

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Status: Patch Available  (was: Open)

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch, 
> HDFS-14001.003.patch
>
>
> Currently, we generate the fsimage and the alias map on one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we currently have to copy the Alias Map by hand. We should also copy 
> the Alias Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Attachment: HDFS-14001.003.patch

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch, 
> HDFS-14001.003.patch
>
>
> Currently, we generate the fsimage and the alias map on one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we currently have to copy the Alias Map by hand. We should also copy 
> the Alias Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712256#comment-16712256
 ] 

Virajith Jalaparti commented on HDFS-14001:
---

{quote}When bootstrapping, if the alias map exists, it currently deletes it 
right away.{quote}

Fixed this in [^HDFS-14001.003.patch] by using {{Storage.confirmFormat}}, which 
relies on the {{-force}} and {{-interactive}} flags to decide whether or not to 
delete the aliasmap directory. This is consistent with how the image 
directories are handled. A test for this is added in 
{{ITestProvidedImplementation#testBootstrapAliasMap}}.
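
For illustration, a rough sketch of the {{-force}}/{{-interactive}} decision 
described above ({{promptUser}} and the surrounding names are hypothetical, not 
the actual {{Storage}} API):
{code}
// -force: delete the existing aliasmap directory without asking.
// -interactive: ask the operator before deleting.
// neither: refuse to touch existing data.
boolean proceed;
if (force) {
  proceed = true;
} else if (interactive) {
  proceed = promptUser("Re-format aliasmap directory " + dir + "?");
} else {
  proceed = false;
}
{code}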

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch, 
> HDFS-14001.003.patch
>
>
> Currently, we generate the fsimage and the alias map on one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we currently have to copy the Alias Map by hand. We should also copy 
> the Alias Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Status: Open  (was: Patch Available)

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch, 
> HDFS-14001.003.patch
>
>
> Currently, we generate the fsimage and the alias map on one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we currently have to copy the Alias Map by hand. We should also copy 
> the Alias Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712212#comment-16712212
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

The test failures are unrelated to the patch.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712215#comment-16712215
 ] 

Hadoop QA commented on HDDS-879:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 31s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
26s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950924/HDDS-879.05.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 71caa0759883 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 6c852f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1890/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1890/testReport/ |
| Max. process+thread count | 1269 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1890/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-879.01.patch, HDDS-879.02.patch, HDDS-879.03.patch, 
> HDDS-879.04.patch, HDDS-879.05.patch
>

[jira] [Commented] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712176#comment-16712176
 ] 

Bharat Viswanadham commented on HDDS-879:
-

Fixed the Jenkins-reported issues in patch v05.

> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-879.01.patch, HDDS-879.02.patch, HDDS-879.03.patch, 
> HDDS-879.04.patch, HDDS-879.05.patch
>
>
> This Jira is to add initiate multipart upload.
> InitiateMultipartUpload does two things:
>  # Creates an entry in the open key table for this key
>  # Adds the multipart info for this key into the multipart info table.
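
For illustration, a hedged sketch of those two steps (the table and helper 
names are illustrative, not the actual OM code):
{code}
// Step 1: record the key as open for this upload.
String uploadID = UUID.randomUUID().toString();
openKeyTable.put(openKey(volume, bucket, key, uploadID), keyInfo);

// Step 2: record the multipart metadata so later part uploads and the
// final complete call can find this upload by its uploadID.
multipartInfoTable.put(multipartKey(volume, bucket, key, uploadID),
    multipartKeyInfo);
{code}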



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-879:

Attachment: HDDS-879.05.patch

> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-879.01.patch, HDDS-879.02.patch, HDDS-879.03.patch, 
> HDDS-879.04.patch, HDDS-879.05.patch
>
>
> This Jira is to add initiate multipart upload.
> InitiateMultipartUpload does two things:
>  # Creates an entry in the open key table for this key
>  # Adds the multipart info for this key into the multipart info table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-12-06 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-14083:
--
Component/s: test

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HDFS-14083
> URL: https://issues.apache.org/jira/browse/HDFS-14083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native, test
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This happens because buffered reads 
> are not supported there (see HADOOP-14603, "S3A input stream to support 
> ByteBufferReadable").
> The following message is printed repeatedly to the error log/STDERR:
> {code}
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> {code}
> h3. Root cause
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via {{hdfsOpenFileImpl()}} calls {{readDirect()}}, 
> which hits this exception.
> h3. Fix:
> Since the HDFS client does not explicitly initiate the byte-buffer read (it 
> happens implicitly), we should not generate the error log when opening a 
> file.
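
For illustration, a minimal sketch of the fallback this implies, written on the 
Java side (assuming the wrapped stream advertises {{ByteBufferReadable}} when 
direct reads are supported; not the actual libhdfs code):
{code}
FSDataInputStream in = fs.open(path);
// Probe once, quietly, instead of logging an error on every open.
if (in.getWrappedStream() instanceof ByteBufferReadable) {
  in.read(byteBuffer);          // zero-copy byte-buffer read path
} else {
  in.read(buf, 0, buf.length);  // plain byte[] fallback, no error logged
}
{code}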



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-858) Start a Standalone Ratis Server on OM

2018-12-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-858:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server on OM startup. The client-OM 
> communication and OM state are not changed in this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-892) Parse aws v2 headers without spaces in Ozone s3 gateway

2018-12-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712157#comment-16712157
 ] 

Hudson commented on HDDS-892:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15573 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15573/])
HDDS-892. Parse aws v2 headers without spaces in Ozone s3 gateway. (bharat: rev 
6c852f2a3757129491c21a9ba3b315a7a00c0c28)
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/header/TestAuthorizationHeaderV4.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AuthorizationHeaderV4.java


> Parse aws v2 headers without spaces in Ozone s3 gateway 
> 
>
> Key: HDDS-892
> URL: https://issues.apache.org/jira/browse/HDDS-892
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-892.001.patch, HDDS-892.002.patch
>
>
> As of now s3g can't be used with s3cmd (which seems to be another popular S3 
> CLI), as it doesn't add any spaces to the authorization header.
> Example command:
> ```
> s3cmd ls --no-ssl --host localhost:9878 --host-bucket 
> '%(bucket)s.localhost:9878' s3://b1qwe
> ```
> Result:
> {code}
> ERROR: S3 error: 404 (AuthorizationHeaderMalformed): The authorization header 
> you provided is invalid.
> {code}
> With the debug option the header could be checked:
> {code}
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(b1qwe): b1qwe.localhost:9878
> DEBUG: ConnMan.get(): creating new connection: http://b1qwe.localhost:9878
> DEBUG: non-proxied HTTPConnection(b1qwe.localhost, 9878)
> DEBUG: format_uri(): /?delimiter=%2F
> DEBUG: Sending request method_string='GET', uri='/?delimiter=%2F', 
> headers={'x-amz-date': '20181203T103324Z', 'Authorization': 'AWS4-HMAC-SHA256 
> Credential=AKIAJGCRGUGIL3DUFDHA/20181203/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=762a2dbe21a01fdd1699fb71336ee06b53302f64cd099055ca54088a9ecfc787',
>  'x-amz-content-sha256': 
> 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 
> bytes)
> DEBUG: ConnMan.put(): connection put back to pool 
> (http://b1qwe.localhost:9878#1)
> {code}
> The problem here is that there are no spaces between the segments of the 
> authorization header, but the AuthorizationHeaderV4 class expects them.
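
For illustration, a minimal sketch of the tolerant parsing idea (plain Java, 
not the actual AuthorizationHeaderV4 change): split on commas and trim each 
segment, so headers with or without spaces parse the same way:
{code}
String v4 = header.substring(header.indexOf(' ') + 1); // drop "AWS4-HMAC-SHA256"
Map<String, String> fields = new HashMap<>();
for (String segment : v4.split(",")) {
  String[] kv = segment.trim().split("=", 2); // trim() tolerates missing spaces
  fields.put(kv[0], kv[1]);
}
// fields now holds Credential, SignedHeaders and Signature.
{code}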



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712159#comment-16712159
 ] 

Hadoop QA commented on HDFS-14084:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 134 unchanged - 0 fixed = 138 total (was 134) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950902/HDFS-14084.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f0dd5599be34 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 019836b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25726/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25726/testReport/ |
| Max. process+thread count | 1688 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common 

[jira] [Updated] (HDDS-892) Parse aws v2 headers without spaces in Ozone s3 gateway

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-892:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thank you, [~elek], for the contribution.

I have committed this to trunk.

> Parse aws v2 headers without spaces in Ozone s3 gateway 
> 
>
> Key: HDDS-892
> URL: https://issues.apache.org/jira/browse/HDDS-892
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-892.001.patch, HDDS-892.002.patch
>
>
> As of now s3g can't be used with s3cmd (which seems to be another popular S3 
> CLI), as it doesn't add any spaces to the authorization header.
> Example command:
> ```
> s3cmd ls --no-ssl --host localhost:9878 --host-bucket 
> '%(bucket)s.localhost:9878' s3://b1qwe
> ```
> Result:
> {code}
> ERROR: S3 error: 404 (AuthorizationHeaderMalformed): The authorization header 
> you provided is invalid.
> {code}
> With the debug option the header could be checked:
> {code}
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(b1qwe): b1qwe.localhost:9878
> DEBUG: ConnMan.get(): creating new connection: http://b1qwe.localhost:9878
> DEBUG: non-proxied HTTPConnection(b1qwe.localhost, 9878)
> DEBUG: format_uri(): /?delimiter=%2F
> DEBUG: Sending request method_string='GET', uri='/?delimiter=%2F', 
> headers={'x-amz-date': '20181203T103324Z', 'Authorization': 'AWS4-HMAC-SHA256 
> Credential=AKIAJGCRGUGIL3DUFDHA/20181203/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=762a2dbe21a01fdd1699fb71336ee06b53302f64cd099055ca54088a9ecfc787',
>  'x-amz-content-sha256': 
> 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 
> bytes)
> DEBUG: ConnMan.put(): connection put back to pool 
> (http://b1qwe.localhost:9878#1)
> {code}
> The problem here is that there are no spaces between the segments of the 
> authorization header, but the AuthorizationHeaderV4 class expects them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-892) Parse aws v2 headers without spaces in Ozone s3 gateway

2018-12-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712146#comment-16712146
 ] 

Bharat Viswanadham commented on HDDS-892:
-

Thank you, [~elek], for the updated patch.

+1 LGTM.

I will commit this shortly.

> Parse aws v2 headers without spaces in Ozone s3 gateway 
> 
>
> Key: HDDS-892
> URL: https://issues.apache.org/jira/browse/HDDS-892
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-892.001.patch, HDDS-892.002.patch
>
>
> As of now s3g can't be used with s3cmd (which seems to be another popular S3 
> CLI), as it doesn't add any spaces to the authorization header.
> Example command:
> ```
> s3cmd ls --no-ssl --host localhost:9878 --host-bucket 
> '%(bucket)s.localhost:9878' s3://b1qwe
> ```
> Result:
> {code}
> ERROR: S3 error: 404 (AuthorizationHeaderMalformed): The authorization header 
> you provided is invalid.
> {code}
> With the debug option the header could be checked:
> {code}
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(b1qwe): b1qwe.localhost:9878
> DEBUG: ConnMan.get(): creating new connection: http://b1qwe.localhost:9878
> DEBUG: non-proxied HTTPConnection(b1qwe.localhost, 9878)
> DEBUG: format_uri(): /?delimiter=%2F
> DEBUG: Sending request method_string='GET', uri='/?delimiter=%2F', 
> headers={'x-amz-date': '20181203T103324Z', 'Authorization': 'AWS4-HMAC-SHA256 
> Credential=AKIAJGCRGUGIL3DUFDHA/20181203/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=762a2dbe21a01fdd1699fb71336ee06b53302f64cd099055ca54088a9ecfc787',
>  'x-amz-content-sha256': 
> 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 
> bytes)
> DEBUG: ConnMan.put(): connection put back to pool 
> (http://b1qwe.localhost:9878#1)
> {code}
> The problem here is that there are no spaces between the segments of the 
> authorization header, but the AuthorizationHeaderV4 class expects them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712079#comment-16712079
 ] 

Erik Krogen commented on HDFS-14084:


One additional advantage of doing it at the FileSystem level is that it can be 
exported through standard means. If HADOOP-15125 is completed, finishing the 
replacement of {{FileSystem#Statistics}} with {{StorageStatistics}}, then this 
information is exported as MapReduce counters and can easily be viewed when 
debugging task/job slowness.
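
For illustration, a short sketch of reading such per-operation counters through 
the existing {{GlobalStorageStatistics}}/{{StorageStatistics}} API (a sketch of 
the standard means mentioned above, not the HADOOP-15125 work itself):
{code}
// Dump every per-operation counter currently registered with the
// global storage statistics, e.g. "hdfs.op_get_file_status = 42".
for (StorageStatistics ss : GlobalStorageStatistics.INSTANCE) {
  java.util.Iterator<StorageStatistics.LongStatistic> it = ss.getLongStatistics();
  while (it.hasNext()) {
    StorageStatistics.LongStatistic stat = it.next();
    System.out.println(ss.getName() + "." + stat.getName()
        + " = " + stat.getValue());
  }
}
{code}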

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is now 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequent an operation is. These statistics can be exposed to the users 
> of the DFS Client, who can periodically log them or apply some form of flow 
> control if the response is slow. This will also help to isolate HDFS issues 
> in a mixed environment where, say, Spark, HBase and Impala run together on a 
> node. We can check the throughput of different operations across clients and 
> isolate problems caused by a noisy neighbor, network congestion, or a shared 
> JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused them. If we had metrics or stats in 
> DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-902) MultipartUpload: S3 API for uploading a part file

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-902:

Attachment: HDDS-902.00.patch

> MultipartUpload: S3 API for uploading a part file
> -
>
> Key: HDDS-902
> URL: https://issues.apache.org/jira/browse/HDDS-902
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-902.00.patch
>
>
> This Jira is created to track the work required for Uploading a part.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712129#comment-16712129
 ] 

Hadoop QA commented on HDDS-99:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 13s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.container.TestContainerReplication |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-99 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950901/HDDS-99.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  
shellcheck  |
| uname | Linux 4df096b0ea2a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 019836b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1889/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1889/testReport/ |
| Max. process+thread count | 1286 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-ozone/common 
hadoop-ozone/dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1889/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding SCM Audit log
> 
>
> 

[jira] [Commented] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712088#comment-16712088
 ] 

Hadoop QA commented on HDFS-14001:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 4 new + 425 unchanged 
- 0 fixed = 429 total (was 425) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950880/HDFS-14001.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux da7c737a463f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c03024a |
| maven | 

[jira] [Created] (HDFS-14132) Add BlockLocation.isStriped() to determine if block is replicated or Striped

2018-12-06 Thread Shweta (JIRA)
Shweta created HDFS-14132:
-

 Summary: Add BlockLocation.isStriped() to determine if block is 
replicated or Striped
 Key: HDFS-14132
 URL: https://issues.apache.org/jira/browse/HDFS-14132
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Shweta
Assignee: Shweta


Impala uses FileSystem#getBlockLocation to get block locations. We can add an 
isStriped() method to make it easier to determine whether a block belongs to a 
replicated file or a striped file.

In HDFS this information is already available via 
HdfsBlockLocation#LocatedBlock#isStriped(), so adding the method to 
BlockLocation does not introduce space overhead.
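
If added, client usage would be straightforward; a hedged sketch, assuming the 
proposed {{BlockLocation#isStriped()}} from this Jira:
{code}
BlockLocation[] locs = fs.getFileBlockLocations(status, 0, status.getLen());
for (BlockLocation loc : locs) {
  if (loc.isStriped()) {   // proposed method
    // handle erasure-coded (striped) block group
  } else {
    // handle replicated block
  }
}
{code}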




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712083#comment-16712083
 ] 

Pranay Singh edited comment on HDFS-14084 at 12/6/18 10:19 PM:
---

Implemented the stats using RpcDetailedMetrics: the metrics are updated in the 
RPC engine for protobuf-based RPCs on the client side, and the collected 
RpcDetailedMetrics are displayed as part of a debug log message.

[~xkrogen] I'll upload another patch that uses StorageStatistics at the 
FileSystem level; that is the one I had been working on, as suggested by 
[~elgoiri].

 

 


was (Author: pranay_singh):
Implemented the stats using RpcDetailedMetrics: the metrics are updated in the 
RPC engine for protobuf-based RPCs on the client side, and the collected 
RpcDetailedMetrics are displayed as part of a debug log message.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is now 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequent an operation is. These statistics can be exposed to the users 
> of the DFS Client, who can periodically log them or apply some form of flow 
> control if the response is slow. This will also help to isolate HDFS issues 
> in a mixed environment where, say, Spark, HBase and Impala run together on a 
> node. We can check the throughput of different operations across clients and 
> isolate problems caused by a noisy neighbor, network congestion, or a shared 
> JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused them. If we had metrics or stats in 
> DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Status: In Progress  (was: Patch Available)

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is now 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequent an operation is. These statistics can be exposed to the users 
> of the DFS Client, who can periodically log them or apply some form of flow 
> control if the response is slow. This will also help to isolate HDFS issues 
> in a mixed environment where, say, Spark, HBase and Impala run together on a 
> node. We can check the throughput of different operations across clients and 
> isolate problems caused by a noisy neighbor, network congestion, or a shared 
> JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused them. If we had metrics or stats in 
> DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Status: Patch Available  (was: In Progress)

Implemented the stats using RpcDetailedMetrics: the metrics are updated in the 
RPC engine for protobuf-based RPCs on the client side, and the collected 
RpcDetailedMetrics are displayed as part of a debug log message.
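
For illustration, a hedged sketch of the client-side timing idea (the 
{{metrics.addProcessingTime}} hook and {{invokeRpc}} are illustrative names; 
the actual patch wires this into the protobuf RPC engine):
{code}
long startNanos = System.nanoTime();
try {
  return invokeRpc(method, request);   // the underlying RPC call
} finally {
  long elapsedMicros = (System.nanoTime() - startNanos) / 1000;
  metrics.addProcessingTime(method.getName(), elapsedMicros); // per-op latency
}
{code}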

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is now 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations take longer or 
> how frequent an operation is. These statistics can be exposed to the users 
> of the DFS Client, who can periodically log them or apply some form of flow 
> control if the response is slow. This will also help to isolate HDFS issues 
> in a mixed environment where, say, Spark, HBase and Impala run together on a 
> node. We can check the throughput of different operations across clients and 
> isolate problems caused by a noisy neighbor, network congestion, or a shared 
> JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused them. If we had metrics or stats in 
> DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HDFS-14131:
---

Assignee: Chao Sun

> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
>
> The documentation should give an overview of the feature, explain the 
> configuration parameters and the startup procedure, and give an example of a 
> recommended deployment.
> It should include a description of Fast Edits Tailing (HDFS-13150), as this 
> is required for efficient reads from the Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14084:

Attachment: HDFS-14084.002.patch

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch
>
>
> The usage of HDFS has changed: from being a filesystem for map-reduce, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking longer or how frequent 
> an operation is. These statistics can be exposed to the users of DFSClient, 
> who can periodically log them or apply some form of flow control if responses 
> are slow. This will also help isolate HDFS issues in a mixed environment 
> where, say, Spark, HBase, and Impala run together on a node. We can check the 
> throughput of different operations across clients and isolate problems caused 
> by a noisy neighbor, network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence as to the cause. If we had metrics or stats in DFSClient, 
> we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712068#comment-16712068
 ] 

Wei-Chiu Chuang commented on HDFS-14084:


(deleted my previous comment because I was talking to Pranay offline and then 
realized I didn't understand what I was talking about)

For the most part, I am interested in the distribution of latency numbers. For 
example, the 50th, 90th, and 99th percentiles of OP_DELETE over some period of 
time, say the past 1 or 5 minutes.

We already have something similar at the RPC server level (via the config key 
dfs.metrics.percentiles.intervals); we just don't have it on the client side.

Perhaps those metrics could be exported periodically, say 1 minute apart, in 
the debug log.

As I went through the thread, one debate is whether this should be done at the 
RPC client level or the file system level. Either way has its own advantages: 
HBase sometimes uses DFSClient directly rather than the file system, so if this 
is done only at the file system level, HBase won't be able to troubleshoot 
performance issues; on the other hand, doing it at the file system level makes 
it generic and applicable to HDFS as well as WebHDFS clients.
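
For reference, the server-side mechanism mentioned above is driven by the 
existing dfs.metrics.percentiles.intervals key; a minimal example (the 60- and 
300-second windows are illustrative values, not a recommendation):

{code}
<!-- hdfs-site.xml: emit latency quantiles (50/75/90/95/99th percentile) for
     NameNode RPC metrics over rolling 60-second and 300-second windows. -->
<property>
  <name>dfs.metrics.percentiles.intervals</name>
  <value>60,300</value>
</property>
{code}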

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a filesystem for map-reduce, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking longer or how frequent 
> an operation is. These statistics can be exposed to the users of DFSClient, 
> who can periodically log them or apply some form of flow control if responses 
> are slow. This will also help isolate HDFS issues in a mixed environment 
> where, say, Spark, HBase, and Impala run together on a node. We can check the 
> throughput of different operations across clients and isolate problems caused 
> by a noisy neighbor, network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence as to the cause. If we had metrics or stats in DFSClient, 
> we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-12-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712073#comment-16712073
 ] 

Hudson commented on HDDS-858:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15572 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15572/])
HDDS-858. Start a Standalone Ratis Server on OM (hanishakoneru: rev 
019836b113577e9f8ec7c7a6c0d31bc0016f9395)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/package-info.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java


> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when OM starts. The client-OM 
> communication and OM state would not be changed in this Jira.
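
For context, starting a standalone Ratis server with the public Apache Ratis 
builder API of the time looks roughly like the sketch below. The peer address, 
the randomly generated group id, and the no-op BaseStateMachine are assumptions 
for illustration and not necessarily what the committed patch does:

{code}
import org.apache.ratis.conf.RaftProperties;
import org.apache.ratis.protocol.RaftGroup;
import org.apache.ratis.protocol.RaftGroupId;
import org.apache.ratis.protocol.RaftPeer;
import org.apache.ratis.protocol.RaftPeerId;
import org.apache.ratis.server.RaftServer;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

public class StandaloneRatisServerSketch {
  public static void main(String[] args) throws Exception {
    // A single-peer (standalone) Raft group; the address is illustrative.
    RaftPeerId peerId = RaftPeerId.valueOf("om1");
    RaftPeer peer = new RaftPeer(peerId, "127.0.0.1:9872");
    RaftGroup group = RaftGroup.valueOf(RaftGroupId.randomId(), peer);

    RaftServer server = RaftServer.newBuilder()
        .setServerId(peerId)
        .setGroup(group)
        .setProperties(new RaftProperties()) // defaults; real code tunes these
        .setStateMachine(new BaseStateMachine()) // no-op placeholder
        .build();
    server.start();
  }
}
{code}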



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712072#comment-16712072
 ] 

Chao Sun commented on HDFS-14131:
-

OK. Let me come up with a doc on this.

> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> The documentation should give an overview of the feature, explain 
> configuration parameters, startup procedure, give an example of recommended 
> deployment.
> It should include the description of Fast Edits Tailing HDFS-13150, as this 
> is required for efficient reads from Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-99) Adding SCM Audit log

2018-12-06 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-99:
--
Attachment: HDDS-99.001.patch
Status: Patch Available  (was: In Progress)

[~anu], [~xyao], [~ajayydv] - Attached patch 001 for your review. Thank you.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712065#comment-16712065
 ] 

Íñigo Goiri commented on HDFS-14131:


Thanks [~csun] for volunteering.
I think it should be at the same level as {{HDFSHighAvailabilityWithQJM.md}}.

> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> The documentation should give an overview of the feature, explain 
> configuration parameters, startup procedure, give an example of recommended 
> deployment.
> It should include the description of Fast Edits Tailing HDFS-13150, as this 
> is required for efficient reads from Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712061#comment-16712061
 ] 

Chao Sun commented on HDFS-14131:
-

I can work on this today if nobody takes the task. Should we put this in a 
markdown file like [HDFS 
commands|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md]?

> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> The documentation should give an overview of the feature, explain 
> configuration parameters, startup procedure, give an example of recommended 
> deployment.
> It should include the description of Fast Edits Tailing HDFS-13150, as this 
> is required for efficient reads from Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-12-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712057#comment-16712057
 ] 

Hudson commented on HDDS-880:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15571 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15571/])
HDDS-880. Create api for ACL handling in Ozone. (Contributed by Ajay (ajay: rev 
8d882c3786f48f57cfbe792ac0b8c9f9c8359abb)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/IOzoneObj.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/RequestContext.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/package-info.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneAclException.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/OzoneObj.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/acl/IAccessAuthorizer.java


> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch, 
> HDDS-880.03.patch, HDDS-880.04.patch, HDDS-880.05.patch, HDDS-880.06.patch
>
>
> Create api for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14084:
---
Comment: was deleted

(was: bq. I'm not sure on the best way to get the information out... logging 
(trace level?) might be too verbose but I don't see many more ways.

IMO logging latency numbers at trace level (for each op?) doesn't work too well 
-- based on my experience trying to measure NameNode RPC latency.

For the most part I am interested in the distribution of latency numbers. For 
example, the 50th, 90th, and 99th percentiles of OP_DELETE over some period of 
time, say the past 30 seconds or 5 minutes.

We already have something similar at the RPC server level (via the config key 
dfs.metrics.percentiles.intervals); we just don't have it on the client side.)

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a filesystem for map-reduce, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking longer or how frequent 
> an operation is. These statistics can be exposed to the users of DFSClient, 
> who can periodically log them or apply some form of flow control if responses 
> are slow. This will also help isolate HDFS issues in a mixed environment 
> where, say, Spark, HBase, and Impala run together on a node. We can check the 
> throughput of different operations across clients and isolate problems caused 
> by a noisy neighbor, network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence as to the cause. If we had metrics or stats in DFSClient, 
> we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-12-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712054#comment-16712054
 ] 

Hanisha Koneru commented on HDDS-858:
-

Thank you [~anu].
Committed to trunk.

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when OM starts. The client-OM 
> communication and OM state would not be changed in this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712046#comment-16712046
 ] 

Wei-Chiu Chuang edited comment on HDFS-14084 at 12/6/18 9:34 PM:
-

bq. I'm not sure on the best way to get the information out... logging (trace 
level?) might be too verbose but I don't see many more ways.

IMO logging latency numbers at trace level (for each op?) doesn't work too well 
-- based on my experience trying to measure NameNode RPC latency.

For the most part I am interested in the distribution of latency numbers. For 
example, the 50th, 90th, and 99th percentiles of OP_DELETE over some period of 
time, say the past 30 seconds or 5 minutes.

We already have something similar at the RPC server level (via the config key 
dfs.metrics.percentiles.intervals); we just don't have it on the client side.


was (Author: jojochuang):
bq. I'm not sure on the best way to get the information out... logging (trace 
level?) might be too verbose but I don't see many more ways.

IMO logging latency numbers at trace level (for each op?) doesn't work too well.

For the most part I am interested in the distribution of latency numbers. For 
example, the 50th, 90th, and 99th percentiles of OP_DELETE over some period of 
time, say the past 30 seconds or 5 minutes.

We already have something similar at the RPC server level (via the config key 
dfs.metrics.percentiles.intervals); we just don't have it on the client side.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a filesystem for map-reduce, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking longer or how frequent 
> an operation is. These statistics can be exposed to the users of DFSClient, 
> who can periodically log them or apply some form of flow control if responses 
> are slow. This will also help isolate HDFS issues in a mixed environment 
> where, say, Spark, HBase, and Impala run together on a node. We can check the 
> throughput of different operations across clients and isolate problems caused 
> by a noisy neighbor, network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence as to the cause. If we had metrics or stats in DFSClient, 
> we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712046#comment-16712046
 ] 

Wei-Chiu Chuang commented on HDFS-14084:


bq. I'm not sure on the best way to get the information out... logging (trace 
level?) might be too verbose but I don't see many more ways.

IMO logging latency numbers at trace level (for each op?) doesn't work too well.

For the most part I am interested in the distribution of latency numbers. For 
example, the 50th, 90th, and 99th percentiles of OP_DELETE over some period of 
time, say the past 30 seconds or 5 minutes.

We already have something similar at the RPC server level (via the config key 
dfs.metrics.percentiles.intervals); we just don't have it on the client side.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a filesystem for map-reduce, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to gauge the workload or stress on the 
> Namenode.
> However, we need more statistics collected for the different operations/RPCs 
> in DFSClient, to know which RPC operations are taking longer or how frequent 
> an operation is. These statistics can be exposed to the users of DFSClient, 
> who can periodically log them or apply some form of flow control if responses 
> are slow. This will also help isolate HDFS issues in a mixed environment 
> where, say, Spark, HBase, and Impala run together on a node. We can check the 
> throughput of different operations across clients and isolate problems caused 
> by a noisy neighbor, network congestion, or a shared JVM.
> We have dealt with several problems from the field for which there was no 
> conclusive evidence as to the cause. If we had metrics or stats in DFSClient, 
> we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 (client-side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-880) Create api for ACL handling in Ozone

2018-12-06 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-880:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~jnp], [~spolavarapu], [~abhayk], thanks for your comments; committed it to 
trunk.

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch, 
> HDDS-880.03.patch, HDDS-880.04.patch, HDDS-880.05.patch, HDDS-880.06.patch
>
>
> Create api for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-12-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712030#comment-16712030
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


Ping. What's the status now?

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA 
> https://issues.apache.org/jira/browse/HDFS-13052 to make it possible to run 
> DistCp with SnapshotDiff on WebHdfsFileSystem. However, the patch does not 
> modify the actual Java class that is used when launching the command "hadoop 
> distcp ...".
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check that 
> the file system is DFS.
> So I propose to change the class DistCpSync to take into account what was 
> committed by Lokesh Jain.
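
A sketch of the kind of change proposed here, assuming a helper of this shape 
(the class, method name, and message are illustrative, not the committed code): 
accept WebHdfsFileSystem in addition to DistributedFileSystem.

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

public final class SnapshotDiffCheck {
  private SnapshotDiffCheck() { }

  // Illustrative only: the current preSyncCheck rejects anything that is not
  // a DistributedFileSystem; the proposal also accepts WebHdfsFileSystem.
  public static void checkSnapshotDiffCapable(FileSystem fs) {
    boolean ok = fs instanceof DistributedFileSystem
        || fs instanceof WebHdfsFileSystem;
    if (!ok) {
      throw new IllegalArgumentException("The FileSystem " + fs.getUri()
          + " does not support snapshot-diff-based distcp");
    }
  }
}
{code}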



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-901) MultipartUpload: S3 API for Initiate multipart upload

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-901:

Attachment: HDDS-901.01.patch

> MultipartUpload: S3 API for Initiate multipart upload
> -
>
> Key: HDDS-901
> URL: https://issues.apache.org/jira/browse/HDDS-901
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-901.00.patch, HDDS-901.01.patch
>
>
> This Jira is to implement this API.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html]
>  
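
To make the target API concrete, this is roughly how a client would exercise 
initiate multipart upload with the AWS SDK for Java. The endpoint points at a 
hypothetical local Ozone S3 gateway, and the bucket and key names are 
placeholders:

{code}
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;

public class InitiateMpuExample {
  public static void main(String[] args) {
    // Hypothetical local Ozone S3 gateway endpoint; adjust to the deployment.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://localhost:9878", "us-east-1"))
        .enablePathStyleAccess()
        .build();

    InitiateMultipartUploadResult result = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest("bucket1", "key1"));

    // The returned uploadId identifies the open multipart upload and is
    // passed to the subsequent upload-part and complete calls.
    System.out.println("uploadId = " + result.getUploadId());
  }
}
{code}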



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712018#comment-16712018
 ] 

Íñigo Goiri commented on HDFS-14001:


When bootstrapping, if the alias map exists, it is currently deleted right away.
I think we should check the force flag and, if it is not set, just log an error.
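
In code terms, the suggestion amounts to a guard like the following sketch; the 
class, method, and flag names are hypothetical, not the patch's actual code:

{code}
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;

public final class AliasMapBootstrap {
  private AliasMapBootstrap() { }

  // Illustrative only: wipe a pre-existing alias map during bootstrapStandby
  // only when -force was given; otherwise log and bail out.
  public static int prepareAliasMapDir(File aliasMapDir, boolean force)
      throws IOException {
    if (aliasMapDir.exists()) {
      if (!force) {
        System.err.println("Alias map " + aliasMapDir + " already exists;"
            + " re-run with -force to overwrite it.");
        return 1; // leave the existing alias map untouched
      }
      FileUtils.deleteDirectory(aliasMapDir);
    }
    return 0;
  }
}
{code}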

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch
>
>
> Currently, we generate the fsimage and the alias map in one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we need to copy the Alias Map by hand. We should copy also the Alias 
> Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712008#comment-16712008
 ] 

Kihwal Lee commented on HDFS-14081:
---

{{metaSave()}} needs to be made to skip the parts that are irrelevant to a 
standby namenode. Let's go over them one by one.

|| metaSave section || relevant to standby NN? || note ||
|Live Datanodes| yes| |
|Dead Datanodes| yes| |
|Blocks waiting for reconstruction| no| uninitialized|
|Blocks currently missing| no| uninitialized|
|Mis-replicated blocks that have been postponed| no| uninitialized|
|Blocks being reconstructed (pendingReconstruction)| no| uninitialized|
|invalidateBlocks| no| uninitialized|
|corruptReplicas| no| uninitialized|
|Individual datanode repl/invalidation status (DataNodeDescriptor)| no| 
unpopulated|

Basically, the counts of live and dead datanodes are the only things that can 
be printed from a standby namenode. Since simply printing node counts has 
little value, it is better to disable metaSave on standby. An alternative is to 
add more standby-specific content that is useful; e.g., PendingDataNodeMessages 
is populated only on standby and its content might be useful in certain 
situations.
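
In code terms, disabling metaSave on standby amounts to a guard like the sketch 
below; the class shape and boolean flags are illustrative stand-ins for 
FSNamesystem state, not the actual patch:

{code}
import java.io.PrintWriter;

/** Illustrative sketch only: skip metaSave sections on a standby NameNode. */
public class MetaSaveGuard {
  private final boolean haEnabled;
  private final boolean inStandbyState;

  public MetaSaveGuard(boolean haEnabled, boolean inStandbyState) {
    this.haEnabled = haEnabled;
    this.inStandbyState = inStandbyState;
  }

  public void metaSave(PrintWriter out) {
    if (haEnabled && inStandbyState) {
      // Reconstruction queues, invalidateBlocks, corruptReplicas, etc. are
      // uninitialized on a standby; only the node counts would be printable.
      out.println("metaSave is not supported on a Standby NameNode.");
      return;
    }
    // Placeholders for the real sections written on an active NameNode.
    out.println("Live Datanodes: ...");
    out.println("Dead Datanodes: ...");
  }
}
{code}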


> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch, HDFS-14081.004.patch
>
>
> Race condition is encountered while adding Block to 
> postponedMisreplicatedBlocks which in turn tried to retrieve Block from 
> BlockManager in which it may not be present. 
> This happens in HA, metasave in first NN succeeded but failed in second NN, 
> StackTrace showing NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711983#comment-16711983
 ] 

Hadoop QA commented on HDDS-879:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} root: The patch generated 3 new + 2 unchanged - 
0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
44s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.codec.TestOmMultipartKeyInfoCodec |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950883/HDDS-879.04.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 0668db08c148 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 343aaea |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1888/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1888/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1888/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1888/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 198 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1888/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue 

[jira] [Commented] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711958#comment-16711958
 ] 

Bharat Viswanadham commented on HDDS-879:
-

Rebased the patch on the latest trunk.

It is now ready for review.

> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-879.01.patch, HDDS-879.02.patch, HDDS-879.03.patch, 
> HDDS-879.04.patch
>
>
> This Jira is to add initiate multipart upload.
> Initiate multipart upload does 2 things:
>  # Create an entry in the open key table for this key
>  # Add the multipart info for this key into the multipart info table.
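
A pseudo-sketch of those two writes, using in-memory maps and invented value 
strings in place of the real RocksDB-backed tables and the 
OmKeyInfo/OmMultipartKeyInfo objects:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Illustrative sketch of the two initiate-MPU writes; not the real OM code. */
public class InitiateMpuSketch {
  // Stand-ins for the RocksDB-backed tables in OmMetadataManager.
  private final Map<String, String> openKeyTable = new HashMap<>();
  private final Map<String, String> multipartInfoTable = new HashMap<>();

  public String initiateMultipartUpload(String volume, String bucket,
      String key) {
    String uploadId = UUID.randomUUID().toString();
    String dbKey = volume + "/" + bucket + "/" + key + "/" + uploadId;
    // 1. Create an entry in the open key table for this key.
    openKeyTable.put(dbKey, "open-key-info for " + key);
    // 2. Add the multipart info for this key into the multipart info table.
    multipartInfoTable.put(dbKey, "multipart-info uploadId=" + uploadId);
    return uploadId;
  }
}
{code}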



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-879) MultipartUpload: Add InitiateMultipartUpload in ozone

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-879:

Attachment: HDDS-879.04.patch

> MultipartUpload: Add InitiateMultipartUpload in ozone
> -
>
> Key: HDDS-879
> URL: https://issues.apache.org/jira/browse/HDDS-879
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-879.01.patch, HDDS-879.02.patch, HDDS-879.03.patch, 
> HDDS-879.04.patch
>
>
> This Jira is to add initiate multipart upload.
> Initiate multipart upload does 2 things:
>  # Create an entry in the open key table for this key
>  # Add the multipart info for this key into the multipart info table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711914#comment-16711914
 ] 

Virajith Jalaparti commented on HDFS-14001:
---

Fixed checkstyle, findbugs issues and test failures in  
[^HDFS-14001.002.patch]. [~elgoiri] can you take a look? 

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch
>
>
> Currently, we generate the fsimage and the alias map in one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we need to copy the Alias Map by hand. We should copy also the Alias 
> Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-880) Create api for ACL handling in Ozone

2018-12-06 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711937#comment-16711937
 ] 

Jitendra Nath Pandey commented on HDDS-880:
---

+1 for the latest patch.

> Create api for ACL handling in Ozone
> 
>
> Key: HDDS-880
> URL: https://issues.apache.org/jira/browse/HDDS-880
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDDS-880.00.patch, HDDS-880.01.patch, HDDS-880.02.patch, 
> HDDS-880.03.patch, HDDS-880.04.patch, HDDS-880.05.patch, HDDS-880.06.patch
>
>
> Create api for ACL handling in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-864:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thank You, [~elek] for the contribution.

I have committed this to the trunk.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.
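
For readers unfamiliar with HDDS-748, the codec idea looks roughly like the 
sketch below; the interface is a simplified stand-in for the real codec 
abstraction, and the String codec is only an example:

{code}
import java.nio.charset.StandardCharsets;

/** Simplified stand-in for the codec abstraction introduced by HDDS-748. */
interface Codec<T> {
  byte[] toPersistedFormat(T object);
  T fromPersistedFormat(byte[] rawData);
}

/** Example codec that lets a table be used as Table<String, String>
 *  instead of a raw Table<byte[], byte[]>. */
class StringCodec implements Codec<String> {
  @Override
  public byte[] toPersistedFormat(String object) {
    return object.getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public String fromPersistedFormat(byte[] rawData) {
    return new String(rawData, StandardCharsets.UTF_8);
  }
}
{code}

A typed table applies such codecs on every get and put, so callers never handle 
raw byte arrays.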



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711936#comment-16711936
 ] 

Hudson commented on HDDS-864:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15569 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15569/])
HDDS-864. Use strongly typed codec implementations for the tables of the 
(bharat: rev 343aaea2d12da0154273ff5f6eedc1ea5fae84cb)
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Codec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/CodecRegistry.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmBucketInfoCodec.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/VolumeListCodec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/package-info.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmVolumeArgsCodec.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmKeyInfoCodec.java


> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711931#comment-16711931
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Thank You, [~elek], for the updated patch.

The test failure is not related to the patch; TestKeys.java is failing on 
Jenkins in other jiras too, so I think it is a flaky unit test.

+1 LGTM.

I will commit this shortly.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher-level, strongly typed metadata tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14131:
---
Description: 
The documentation should give an overview of the feature, explain configuration 
parameters, startup procedure, give an example of recommended deployment.
It should include the description of Fast Edits Tailing HDFS-13150, as this is 
required for efficient reads from Observer.

  was:
The documentation should give an overview of the feature, explain configuration 
parameters, give an example of recommended deployment.
It should include the description of Fast Edits Tailing HDFS-13150, as this is 
required for efficient reads from Observer.


> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> The documentation should give an overview of the feature, explain 
> configuration parameters, startup procedure, give an example of recommended 
> deployment.
> It should include the description of Fast Edits Tailing HDFS-13150, as this 
> is required for efficient reads from Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Status: Patch Available  (was: Open)

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch
>
>
> Currently, we generate the fsimage and the alias map in one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we need to copy the Alias Map by hand. We should copy also the Alias 
> Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Attachment: HDFS-14001.002.patch

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch, HDFS-14001.002.patch
>
>
> Currently, we generate the fsimage and the alias map in one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we need to copy the Alias Map by hand. We should copy also the Alias 
> Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14001) [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap

2018-12-06 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-14001:
--
Status: Open  (was: Patch Available)

> [PROVIDED Storage] bootstrapStandby should manage the InMemoryAliasMap
> --
>
> Key: HDFS-14001
> URL: https://issues.apache.org/jira/browse/HDFS-14001
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-14001.001.patch
>
>
> Currently, we generate the fsimage and the alias map in one machine. When we 
> start the other NNs, we use bootstrapStandby to propagate the fsimage. 
> However, we need to copy the Alias Map by hand. We should copy also the Alias 
> Map as part of this process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711893#comment-16711893
 ] 

Íñigo Goiri commented on HDFS-14131:


Thanks [~shv] for working on this.
Could we add a quick skeleton?
I would like to have something basic before merging.
We can extend it in the future.

> Create user guide for "Consistent reads from Observer" feature.
> ---
>
> Key: HDFS-14131
> URL: https://issues.apache.org/jira/browse/HDFS-14131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> The documentation should give an overview of the feature, explain 
> configuration parameters, give an example of recommended deployment.
> It should include the description of Fast Edits Tailing HDFS-13150, as this 
> is required for efficient reads from Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-12-06 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711890#comment-16711890
 ] 

Konstantin Shvachko commented on HDFS-12943:


Hey [~goiri], see HDFS-14131 - the documentation jira.

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14131) Create user guide for "Consistent reads from Observer" feature.

2018-12-06 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14131:
--

 Summary: Create user guide for "Consistent reads from Observer" 
feature.
 Key: HDFS-14131
 URL: https://issues.apache.org/jira/browse/HDFS-14131
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko


The documentation should give an overview of the feature, explain the 
configuration parameters, and give an example of a recommended deployment.
It should include a description of Fast Edits Tailing (HDFS-13150), as this is 
required for efficient reads from the Observer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-06 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14130:
--

 Summary: Make ZKFC ObserverNode aware
 Key: HDFS-14130
 URL: https://issues.apache.org/jira/browse/HDFS-14130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko


We need to fix automatic failover with ZKFC. Currently it does not know about 
ObserverNodes and tries to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12861) Track speed in DFSClient

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711860#comment-16711860
 ] 

Hadoop QA commented on HDFS-12861:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12861 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12861 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918330/HDFS-12861-10-april-18.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25724/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-10-april-18.patch
>
>
> Sometimes we get slow jobs because of access to HDFS. However, it is hard to 
> tell what the actual speed is. We propose adding a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}
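
For illustration, the MB/s figure in such a line could be computed along these 
lines (a minimal sketch with made-up names, not the actual HDFS-12861 patch):
{code:java}
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Minimal sketch only: logs "<block> <op> <bytes>B in <ms>ms <MB/s>MB/s".
class ThroughputLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(ThroughputLogger.class);
  private static final double BYTES_PER_MB = 1024.0 * 1024.0;

  static void log(String op, ExtendedBlock block, long bytes, long elapsedMs) {
    double seconds = Math.max(elapsedMs, 1) / 1000.0; // avoid divide-by-zero
    double mbPerSec = (bytes / BYTES_PER_MB) / seconds;
    LOG.info("{} {} {}B in {}ms {}MB/s", block.getLocalBlock(), op, bytes,
        elapsedMs, String.format("%.1f", mbPerSec));
  }
}
{code}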



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-06 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711855#comment-16711855
 ] 

Jitendra Nath Pandey commented on HDDS-870:
---

The test TestFailureHandlingByClient seems to be flaky.
{code:java}
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 137.352 s <<< FAILURE! - in org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient
[ERROR] testMultiBlockWritesWithIntermittentDnFailures(org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient)  Time elapsed: 83.03 s  <<< ERROR!
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Unable to find the chunk file. chunk info ChunkInfo{chunkName='6fdfe7e7b129c48ecbf39efa5de52c8f_stream_318544a3-ec1d-4a4f-b280-7d2a53ca30bf_chunk_1, offset=0, len=1048576}
	at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:495)
	at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.readChunk(ContainerProtocolCalls.java:221)
	at org.apache.hadoop.hdds.scm.storage.ChunkInputStream.readChunkFromContainer(ChunkInputStream.java:211)
	at org.apache.hadoop.hdds.scm.storage.ChunkInputStream.prepareRead(ChunkInputStream.java:175)
	at org.apache.hadoop.hdds.scm.storage.ChunkInputStream.read(ChunkInputStream.java:130)
	at org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.read(ChunkGroupInputStream.java:231)
	at org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.read(ChunkGroupInputStream.java:125)
	at org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:49)
	at java.io.InputStream.read(InputStream.java:101)
	at org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:656)
	at org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.validateData(TestFailureHandlingByClient.java:246)
	at org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithIntermittentDnFailures(TestFailureHandlingByClient.java:234)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

[INFO] Results:
[ERROR] Errors:
[ERROR]   TestFailureHandlingByClient.testMultiBlockWritesWithIntermittentDnFailures:234->validateData:246 » StorageContainer
{code}

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>

[jira] [Commented] (HDFS-12861) Track speed in DFSClient

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711852#comment-16711852
 ] 

Erik Krogen commented on HDFS-12861:


Hi [~mf_borge], are you still planning on working on this? It looks like very 
good work and I would be happy to help with reviews.

> Track speed in DFSClient
> 
>
> Key: HDFS-12861
> URL: https://issues.apache.org/jira/browse/HDFS-12861
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: María Fernanda Borge
>Priority: Major
> Attachments: HDFS-12861-10-april-18.patch
>
>
> Sometimes we get slow jobs because of access to HDFS. However, it is hard to 
> tell what the actual speed is. We propose adding a log line with something 
> like:
> {code}
> 2017-11-19 09:55:26,309 INFO [main] hdfs.DFSClient: blk_1107222019_38144502 
> READ 129500B in 7ms 17.6MB/s
> 2017-11-27 19:01:04,141 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792057_86833357 WRITE 131072B in 10ms 12.5MB/s
> 2017-11-27 19:01:14,219 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792069_86833369 WRITE 131072B in 12ms 10.4MB/s
> 2017-11-27 19:01:24,282 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792081_86833381 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:34,330 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792093_86833393 WRITE 131072B in 11ms 11.4MB/s
> 2017-11-27 19:01:44,408 INFO [DataStreamer for file 
> /hdfs-federation/stats/2017/11/27/151183800.json] hdfs.DFSClient: 
> blk_1135792105_86833405 WRITE 131072B in 11ms 11.4MB/s
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-12-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711847#comment-16711847
 ] 

Anu Engineer commented on HDDS-858:
---

+1, thanks for the contribution.

 

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-858.002.patch, HDDS-858.003.patch, 
> HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM as a start. Once the 
> Ratis server and state machine are integrated into OM, the replicated Ratis 
> state machine can be implemented for OM.
> This Jira aims only to start a Ratis server when OM starts. The client-OM 
> communication and OM state are not changed in this Jira.
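
For reference, starting a single-node Ratis server is roughly the following (a 
sketch against the Ratis builder API of that era; the id, address, and the 
no-op state machine are placeholders):
{code:java}
import org.apache.ratis.conf.RaftProperties;
import org.apache.ratis.protocol.RaftGroup;
import org.apache.ratis.protocol.RaftGroupId;
import org.apache.ratis.protocol.RaftPeer;
import org.apache.ratis.protocol.RaftPeerId;
import org.apache.ratis.server.RaftServer;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

public class StandaloneOmRatisServer {
  public static void main(String[] args) throws Exception {
    // Single-peer group: the server is its own quorum.
    RaftPeer peer = new RaftPeer(RaftPeerId.valueOf("om1"), "127.0.0.1:9872");
    RaftGroup group = RaftGroup.valueOf(RaftGroupId.randomId(), peer);
    RaftServer server = RaftServer.newBuilder()
        .setServerId(peer.getId())
        .setGroup(group)
        .setProperties(new RaftProperties())
        .setStateMachine(new BaseStateMachine()) // no-op state machine for now
        .build();
    server.start();
  }
}
{code}
Integrating a real OM state machine would come later, as the description says.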



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-06 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711811#comment-16711811
 ] 

Jitendra Nath Pandey commented on HDDS-870:
---

+1, I will commit shortly.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch, 
> HDDS-870.008.patch, HDDS-870.009.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer to cache data. This 
> can be replaced with an array of buffers of the flush buffer size that is 
> configured for handling two-node failures as well.
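
A minimal sketch of the idea, with illustrative names and sizes (not the 
actual HDDS-870 patch): grow memory in flush-sized increments instead of 
allocating a full block-sized buffer per key.
{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class IncrementalBufferPool {
  private final int flushBufferSize; // configured flush size, e.g. a few MB
  private final List<ByteBuffer> buffers = new ArrayList<>();

  IncrementalBufferPool(int flushBufferSize) {
    this.flushBufferSize = flushBufferSize;
  }

  /** Return a buffer with space left, allocating a new one only when needed. */
  ByteBuffer currentBuffer() {
    if (buffers.isEmpty() || !buffers.get(buffers.size() - 1).hasRemaining()) {
      buffers.add(ByteBuffer.allocate(flushBufferSize));
    }
    return buffers.get(buffers.size() - 1);
  }

  /** Drop the oldest buffers once their contents are flushed and acked. */
  void releaseFlushed(int bufferCount) {
    buffers.subList(0, Math.min(bufferCount, buffers.size())).clear();
  }
}
{code}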



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14109) Improve hdfs auditlog format and support federation friendly

2018-12-06 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711810#comment-16711810
 ] 

Kihwal Lee commented on HDFS-14109:
---

This has less to do with federation itself. Rather, it is more about the way 
audit logs are collected and processed from multiple namenodes. People deal 
with logs from multiple systems today without having to insert the source 
identity in every single log line. 

> Improve hdfs auditlog format and support federation friendly
> 
>
> Key: HDFS-14109
> URL: https://issues.apache.org/jira/browse/HDFS-14109
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14109.patch
>
>
> The following audit log format does not meet the requirements of a federated 
> architecture well. In some cases we need to aggregate the audit logs of all 
> namespaces; if there are requests for common paths (e.g. /tmp, /user, etc., 
> paths which may not appear in the mount table but are nonetheless real), we 
> have no way to tell which namespace a request went to. So I propose adding an 
> {{nsid}} column to support federation better.  
> {quote}2018-11-27 13:20:30,028 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs/hostn...@realm.com (auth:KERBEROS)  ip=/10.1.1.2 cmd=getfileinfo 
> src=/path   dst=null        perm=null       proto=rpc       clientName=null
> {quote}
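
For illustration, a line with the proposed {{nsid}} column might look like the 
following (the value ns1 is a made-up example):
{noformat}
2018-11-27 13:20:30,028 INFO FSNamesystem.audit: allowed=true   
ugi=hdfs/hostn...@realm.com (auth:KERBEROS)  ip=/10.1.1.2 nsid=ns1 
cmd=getfileinfo src=/path   dst=null        perm=null       proto=rpc       
clientName=null
{noformat}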



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-06 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711802#comment-16711802
 ] 

Shashikant Banerjee commented on HDDS-870:
--

The reported test failures are not related to the patch. The TestKeys tests 
run on a standalone pipeline, which needs to be changed to a single-node Ratis 
pipeline.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch, 
> HDDS-870.008.patch, HDDS-870.009.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer to cache data. This 
> can be replaced with an array of buffers of the flush buffer size that is 
> configured for handling two-node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711794#comment-16711794
 ] 

Íñigo Goiri commented on HDFS-13443:


Thanks [~arshad.mohammad] for tackling my comments.
Two minor nits:
* {{MountTableRefresherService#refresh()}} could use {{else if}} in lines 
216/217.
* {{MountTableRefresherService#refresh()}} could define the {{RouterClient}} 
directly on the line where it is used, as it is not used outside the try block.

The unit tests seem to run fine:
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/25723/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterAdminCLI/
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/25723/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterMountTableCacheRefresh/

+1

Anybody else available to take a look?

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch, HDFS-13443.001.patch, HDFS-13443.002.patch, 
> HDFS-13443.003.patch, HDFS-13443.004.patch, HDFS-13443.005.patch, 
> HDFS-13443.006.patch, HDFS-13443.007.patch, HDFS-13443.008.patch, 
> HDFS-13443.009.patch, HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table, which is wrong.
> To update the mount table cache, we could do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its own cache and ask the other routers to update theirs*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, on the admin server side, will perform the following 
> sequence of actions (see the sketch after this list):
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user
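
A hypothetical sketch of this sequence (the stand-in interfaces below are 
illustrative only and do not match the real RBF classes one-to-one):
{code:java}
import java.io.IOException;
import java.util.List;

interface StateStore { // stand-in for the State Store service
  void addMountTableEntry(String srcPath, String targetPath) throws IOException;
}

interface RouterClient { // stand-in admin client for a remote router
  void refreshMountTableEntries() throws IOException;
}

class MountTableRefresher {
  private final StateStore stateStore;
  private final List<RouterClient> peers; // admin clients for R2, R3, ...

  MountTableRefresher(StateStore stateStore, List<RouterClient> peers) {
    this.stateStore = stateStore;
    this.peers = peers;
  }

  /** Handle an add-mount-table-entry request arriving at this router (R1). */
  void addEntry(String srcPath, String targetPath) throws IOException {
    stateStore.addMountTableEntry(srcPath, targetPath); // 2. persist the entry
    for (RouterClient peer : peers) {
      peer.refreshMountTableEntries(); // 3-4. refresh the remote routers
    }
    refreshLocalCache(); // 5. refresh our own cache before responding
  }

  private void refreshLocalCache() {
    // Reload the mount table entries from the state store into memory.
  }
}
{code}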



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-12-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711797#comment-16711797
 ] 

Íñigo Goiri commented on HDFS-12943:


Is there a JIRA tracking the documentation/user guide?
I think we should be able to push that fairly fast.

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed 
> system, the problem of stale reads must be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-12-06 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711791#comment-16711791
 ] 

Shweta commented on HDFS-14081:
---

Hi [~kihwal],

Can you please review the patch and commit it upstream?

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch, HDFS-14081.004.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave succeeded on the first NN but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 172.26.9.163:60234
> java.lang.NullPointerException
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>   at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {code}
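
For illustration only, a guard along these lines in the metasave path would 
avoid the NPE (a hypothetical shape, not the committed patch):
{code:java}
// Inside BlockManager; names follow the stack trace above, but this is a sketch.
private void dumpBlockMeta(Block block, PrintWriter out) {
  BlockInfo storedBlock = getStoredBlock(block);
  if (storedBlock == null) {
    // The block may have been removed concurrently, e.g. while queued in
    // postponedMisreplicatedBlocks; report it instead of hitting an NPE.
    out.println(block + " is no longer present in the BlockManager");
    return;
  }
  // ... existing dump logic operating on storedBlock ...
}
{code}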



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14129) RBF: Create new policy provider for router

2018-12-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14129:
---
Description: 
The Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to 
this class for the Router; it's better to create a new policy provider for the 
Router.
{code:java}
// Set service-level authorization security policy
if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
}
{code}
I hit this issue when verifying HDFS-14079 on a secure cluster.
{noformat}
./bin/hdfs dfsrouteradmin -ls /
ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
{noformat}

  was:
Router is using *{{HDFSPolicyProvider}}*. We can't add new ptotocol in this 
class for router, its better to create in policy provider for Router.
{code:java}
// Set service-level authorization security policy
if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
}
{code}
I got this issue when I am verified HDFS-14079 with secure cluster.
{noformat}
./bin/hdfs dfsrouteradmin -ls /
ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
not known.
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
{noformat}


> RBF: Create new policy provider for router
> --
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
>
> The Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to 
> this class for the Router; it's better to create a new policy provider for 
> the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I hit this issue when verifying HDFS-14079 on a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}
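
A minimal sketch of such a provider (the ACL key string is a placeholder, not 
necessarily the name an eventual patch would use):
{code:java}
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
import org.apache.hadoop.security.authorize.PolicyProvider;
import org.apache.hadoop.security.authorize.Service;

public class RouterPolicyProvider extends PolicyProvider {
  private static final Service[] SERVICES = new Service[] {
      // Map the Router admin protocol to its own ACL key.
      new Service("security.router.admin.protocol.acl",
          RouterAdminProtocol.class)
  };

  @Override
  public Service[] getServices() {
    return SERVICES;
  }
}
{code}
The Router admin server would then pass this provider to 
{{refreshServiceAcl}} instead of {{HDFSPolicyProvider}}.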



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14129) RBF: Create new policy provider for router

2018-12-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14129:
---
Summary: RBF: Create new policy provider for router  (was: RBF : Create new 
policy provider for router)

> RBF: Create new policy provider for router
> --
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
>
> The Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to 
> this class for the Router; it's better to create a new policy provider for 
> the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I hit this issue when verifying HDFS-14079 on a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14109) Improve hdfs auditlog format and support federation friendly

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711766#comment-16711766
 ] 

Erik Krogen commented on HDFS-14109:


I think as with most recent additions to the audit log, it should be protected 
by a config which defaults to off. In particular, in an environment using only 
a single namespace, we definitely don't want this information, and an 
installation may already have some way of adding this information back at a 
later time without the NameNode having to write it out on every single audit 
entry.

> Improve hdfs auditlog format and support federation friendly
> 
>
> Key: HDFS-14109
> URL: https://issues.apache.org/jira/browse/HDFS-14109
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14109.patch
>
>
> The following audit log format does not meet the requirements of a federated 
> architecture well. In some cases we need to aggregate the audit logs of all 
> namespaces; if there are requests for common paths (e.g. /tmp, /user, etc., 
> paths which may not appear in the mount table but are nonetheless real), we 
> have no way to tell which namespace a request went to. So I propose adding an 
> {{nsid}} column to support federation better.  
> {quote}2018-11-27 13:20:30,028 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs/hostn...@realm.com (auth:KERBEROS)  ip=/10.1.1.2 cmd=getfileinfo 
> src=/path   dst=null        perm=null       proto=rpc       clientName=null
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711768#comment-16711768
 ] 

Hadoop QA commented on HDDS-870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} root: The patch generated 1 new + 8 unchanged - 
0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 13s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
48s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950866/HDDS-870.009.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux b101c1a5f6ab 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / c03024a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1887/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1887/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1887/testReport/ |
| Max. process+thread count | 1085 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-ozone/client hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1887/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement

[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711760#comment-16711760
 ] 

Erik Krogen commented on HDFS-14084:


[~pranay_singh] - thanks for taking this on, this is really valuable work. 
Regarding:
{quote}
the problem with StorageStatistics is that it just records the frequency of a 
particular operation and not it's latency
{quote}
Ideally this would be a great time to _extend_ {{StorageStatistics}} to support 
latency as well as frequency. This is work I am particularly interested in and 
was planning on tackling soon, I would love to collaborate. 
{{StorageStatistics}} has had some great work done on it so far, and it isn't 
quite where it needs to be yet, but the only way it will get there is by us 
continuing to make it better. Going down a parallel path that doesn't have 
long-term and/or shareable benefits isn't good in the long run.
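
As a sketch of what that extension could look like (the {{.latencyMs}} key 
suffix and the class name are made up here, not an agreed design):
{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.apache.hadoop.fs.StorageStatistics;

public class LatencyStorageStatistics extends StorageStatistics {
  private static final String LATENCY_SUFFIX = ".latencyMs";
  private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();
  private final Map<String, LongAdder> latencies = new ConcurrentHashMap<>();

  public LatencyStorageStatistics() {
    super("latencyStats"); // statistics name, made up for the sketch
  }

  /** Record one invocation of op together with its elapsed time. */
  public void record(String op, long elapsedMs) {
    counts.computeIfAbsent(op, k -> new LongAdder()).increment();
    latencies.computeIfAbsent(op, k -> new LongAdder()).add(elapsedMs);
  }

  @Override
  public Iterator<LongStatistic> getLongStatistics() {
    List<LongStatistic> all = new ArrayList<>();
    counts.forEach((op, c) -> all.add(new LongStatistic(op, c.sum())));
    latencies.forEach((op, l) ->
        all.add(new LongStatistic(op + LATENCY_SUFFIX, l.sum())));
    return all.iterator();
  }

  @Override
  public Long getLong(String key) {
    if (key.endsWith(LATENCY_SUFFIX)) {
      LongAdder l = latencies.get(
          key.substring(0, key.length() - LATENCY_SUFFIX.length()));
      return l == null ? null : l.sum();
    }
    LongAdder c = counts.get(key);
    return c == null ? null : c.sum();
  }

  @Override
  public boolean isTracked(String key) {
    return getLong(key) != null;
  }

  @Override
  public void reset() {
    counts.clear();
    latencies.clear();
  }
}
{code}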

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the Namenode, so we have metrics to measure the workload or stress on 
> the Namenode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> and how frequent each operation is. These statistics can be exposed to the 
> users of the DFS Client, who can periodically log them or do some sort of 
> flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where, say, a node runs Spark, HBase and Impala 
> together. We can check the throughput of different operations across clients 
> and isolate problems caused by a noisy neighbor, network congestion, or a 
> shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13999) Bogus missing block warning if the file is under construction when NN starts

2018-12-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711732#comment-16711732
 ] 

Erik Krogen commented on HDFS-13999:


Thanks for fixing this [~jojochuang]! This has plagued us for a long time. It's 
really great to hear that Dynamometer was useful for this!

> Bogus missing block warning if the file is under construction when NN starts
> 
>
> Key: HDFS-13999
> URL: https://issues.apache.org/jira/browse/HDFS-13999
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.7.8
>
> Attachments: HDFS-13999.branch-2.7.001.patch, webui missing blocks.png
>
>
> We found an interesting case where web UI displays a few missing blocks, but 
> it doesn't state which files are corrupt. What'll also happen is that fsck 
> states the file system is healthy. This bug is similar to HDFS-10827 and 
> HDFS-8533. 
>  (See the attachment for an example)
> Using Dynamometer, I was able to reproduce the bug, and realized that the 
> "missing" blocks are actually healthy, but somehow neededReplications doesn't 
> get updated when the NN receives block reports. What's more interesting is 
> that the files associated with the "missing" blocks are under construction 
> when the NN starts, so after a while the NN prints a file recovery log.
> Given that, I determined the following code is the source of the bug:
> {code:java|title=BlockManager#addStoredBlock}
> 
>// if file is under construction, then done for now
> if (bc.isUnderConstruction()) {
>   return storedBlock;
> }
> {code}
> which is wrong, because a file may have multiple blocks, and the first block 
> may be complete. In that case, the neededReplications structure doesn't get 
> updated for the first block, hence the missing block warning on the web UI. 
> More appropriately, it should check the state of the block itself, not the 
> file.
> Fortunately, it was unintentionally fixed via HDFS-9754:
> {code:java}
> // if block is still under construction, then done for now
> if (!storedBlock.isCompleteOrCommitted()) {
>   return storedBlock;
> }
> {code}
> We should bring this fix into branch-2.7 too. That said, this is a harmless 
> warning, and it should go away after the under-construction files are 
> recovered and the NN restarts (or full block reports are forced).
> Kudos to Dynamometer! It would be impossible to reproduce this bug without 
> the tool. And thanks [~smeng] for helping with the reproduction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-870) Avoid creating block sized buffer in ChunkGroupOutputStream

2018-12-06 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711691#comment-16711691
 ] 

Shashikant Banerjee commented on HDDS-870:
--

Patch v9 addresses the test failure related to TestFailureHandlingByClient. 
The other test failure is not related.

> Avoid creating block sized buffer in ChunkGroupOutputStream
> ---
>
> Key: HDDS-870
> URL: https://issues.apache.org/jira/browse/HDDS-870
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-870.000.patch, HDDS-870.001.patch, 
> HDDS-870.002.patch, HDDS-870.003.patch, HDDS-870.004.patch, 
> HDDS-870.005.patch, HDDS-870.006.patch, HDDS-870.007.patch, 
> HDDS-870.008.patch, HDDS-870.009.patch
>
>
> Currently, for a key, we create a block-sized byteBuffer to cache data. This 
> can be replaced with an array of buffers of the flush buffer size that is 
> configured for handling two-node failures as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


