[jira] [Commented] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750835#comment-16750835
 ] 

Hadoop QA commented on HDFS-14225:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 10s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 31s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  7s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14225 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956091/HDFS-14225-HDFS-13891.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c571cf2bcc46 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26035/testReport/ |
| Max. process+thread count | 955 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26035/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> ---

[jira] [Updated] (HDDS-993) Update hadoop version to 3.2.0

2019-01-23 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-993:
---
Attachment: (was: HDDS-993.000.patch)

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch
>
>
> This Jira is to update the Hadoop version to 3.2.0 and clean up the
> snapshot repository usage in the ozone module






[jira] [Updated] (HDDS-993) Update hadoop version to 3.2.0

2019-01-23 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-993:
---
Attachment: HDDS-993.000.patch
Status: Patch Available  (was: Open)

Changes:
 # Changed the hadoop version to 3.2.0 in the pom.xml of hadoop-ozone and hadoop-hdds
 # Changed the docker compose YAML files to make OM wait for SCM to come up, using the WAITFOR env variable. This avoids intermittent DNS resolution errors during smoke tests.

Testing: executed the following scripts

./hadoop-ozone/dev-support/checks/build.sh

./hadoop-ozone/dev-support/checks/acceptance.sh

to verify the build.
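
For reference, the compose change has roughly this shape (a sketch only: the service name {{scm}} and port 9876, SCM's default client RPC port, are assumptions rather than the exact committed values):

{noformat}
om:
  image: apache/hadoop-runner
  environment:
    # Assumed wiring: the image's start script blocks until scm:9876 is
    # reachable, so OM starts only after SCM's DNS name resolves.
    WAITFOR: scm:9876
{noformat}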

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch, HDDS-993.000.patch
>
>
> This Jira is to update the Hadoop version to 3.2.0 and clean up the
> snapshot repository usage in the ozone module






[jira] [Created] (HDFS-14226) RBF: setErasureCodingPolicy should set all multiple subclusters' directories.

2019-01-23 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14226:
---

 Summary: RBF: setErasureCodingPolicy should set all multiple 
subclusters' directories.
 Key: HDFS-14226
 URL: https://issues.apache.org/jira/browse/HDFS-14226
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma


Currently, the policy is set on only one subcluster.

{noformat}
// create a mount point of multiple subclusters
hdfs dfsrouteradmin -add /all_data ns1 /data1
hdfs dfsrouteradmin -add /all_data ns2 /data2

hdfs ec -Dfs.defaultFS=hdfs://router: -setPolicy -path /all_data -policy 
RS-3-2-1024k
Set RS-3-2-1024k erasure coding policy on /all_data

hdfs ec -Dfs.defaultFS=hdfs://router: -getPolicy -path /all_data
RS-3-2-1024k

hdfs ec -Dfs.defaultFS=hdfs://ns1-namenode:8020 -getPolicy -path /data1
RS-3-2-1024k

hdfs ec -Dfs.defaultFS=hdfs://ns2-namenode:8020 -getPolicy -path /data2
The erasure coding policy of /data2 is unspecified
{noformat}
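
A minimal sketch of the expected behavior (illustrative names, not the actual RBF router code):

{code:java}
import java.io.IOException;
import java.util.List;

// SubclusterClient is an illustrative stand-in, not the real RBF client API.
public class SetEcPolicyOnAllDestinations {

  /** One handle per destination of the mount point, e.g. ns1 and ns2. */
  interface SubclusterClient {
    void setErasureCodingPolicy(String path, String policy) throws IOException;
  }

  /**
   * Apply the policy to every resolved destination (/data1 on ns1 and
   * /data2 on ns2 for /all_data) instead of stopping after the first.
   */
  static void setPolicyEverywhere(List<SubclusterClient> clients,
      List<String> paths, String policy) throws IOException {
    for (int i = 0; i < clients.size(); i++) {
      clients.get(i).setErasureCodingPolicy(paths.get(i), policy);
    }
  }
}
{code}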






[jira] [Updated] (HDDS-993) Update hadoop version to 3.2.0

2019-01-23 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-993:
---
Attachment: HDDS-993.000.patch

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-993.000.patch
>
>
> This Jira is to update the Hadoop version to 3.2.0 and clean up the
> snapshot repository usage in the ozone module






[jira] [Commented] (HDDS-996) Incorrect data length gets updated in OM by client in case it hits exception in multiple successive block writes

2019-01-23 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750806#comment-16750806
 ] 

Jitendra Nath Pandey commented on HDDS-996:
---

+1

> Incorrect data length gets updated in OM by client in case it hits exception 
> in multiple successive block writes
> 
>
> Key: HDDS-996
> URL: https://issues.apache.org/jira/browse/HDDS-996
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-996.000.patch
>
>
> In the retry path, the data which needs to be written to the next block
> should always be calculated from the data actually residing in the buffer
> list, rather than from the length of the currently allocated stream entry.
> Otherwise an incorrect key length gets updated in OM when multiple
> exceptions occur while doing key writes.
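
In code form, the rule reads roughly as follows (an illustrative sketch, not the actual Ozone client; the buffer-list shape is assumed):

{code:java}
import java.nio.ByteBuffer;
import java.util.List;

// On retry, the length to rewrite comes from the data still held in the
// buffer list, not from the failed stream entry's allocated length.
final class RetryLengthSketch {
  static long bytesToRewrite(List<ByteBuffer> bufferList) {
    long total = 0;
    for (ByteBuffer buffer : bufferList) {
      // position() = bytes written into this buffer but not yet acknowledged
      total += buffer.position();
    }
    return total;
  }
}
{code}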






[jira] [Assigned] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-23 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-14223:
---

Assignee: Takanobu Asanuma

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
>
> When using multiple sub-clusters for a mount point, we need to set
> {{dfs.federation.router.file.resolver.client.class}} to
> {{MultipleDestinationMountTableResolver}}. The current documents lack this
> explanation. We should add it to HDFSRouterFederation.md and
> hdfs-rbf-default.xml.
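
For illustration, the setting the new docs should cover looks like this (a sketch; the resolver's package path is assumed, and hdfs-rbf-site.xml would carry the same property/value pair):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Point the router's file resolver at MultipleDestinationMountTableResolver.
public class MultiDestResolverConfig {
  public static Configuration routerConf() {
    Configuration conf = new Configuration();
    conf.set("dfs.federation.router.file.resolver.client.class",
        "org.apache.hadoop.hdfs.server.federation.resolver."
            + "MultipleDestinationMountTableResolver");
    return conf;
  }
}
{code}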






[jira] [Commented] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750801#comment-16750801
 ] 

Takanobu Asanuma commented on HDFS-14223:
-

Uploaded the 1st patch.

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set
> {{dfs.federation.router.file.resolver.client.class}} to
> {{MultipleDestinationMountTableResolver}}. The current documents lack this
> explanation. We should add it to HDFSRouterFederation.md and
> hdfs-rbf-default.xml.






[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-23 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14223:

Status: Patch Available  (was: Open)

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set
> {{dfs.federation.router.file.resolver.client.class}} to
> {{MultipleDestinationMountTableResolver}}. The current documents lack this
> explanation. We should add it to HDFSRouterFederation.md and
> hdfs-rbf-default.xml.






[jira] [Updated] (HDFS-14223) RBF: Add configuration documents for using multiple sub-clusters

2019-01-23 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14223:

Attachment: HDFS-14223-HDFS-13891.1.patch

> RBF: Add configuration documents for using multiple sub-clusters
> 
>
> Key: HDFS-14223
> URL: https://issues.apache.org/jira/browse/HDFS-14223
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14223-HDFS-13891.1.patch
>
>
> When using multiple sub-clusters for a mount point, we need to set
> {{dfs.federation.router.file.resolver.client.class}} to
> {{MultipleDestinationMountTableResolver}}. The current documents lack this
> explanation. We should add it to HDFSRouterFederation.md and
> hdfs-rbf-default.xml.






[jira] [Updated] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14225:
-
Attachment: HDFS-14225-HDFS-13891.000.patch

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting an UnknownHostException in the unit tests.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}






[jira] [Commented] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750784#comment-16750784
 ] 

Hadoop QA commented on HDDS-997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m  0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 35s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  0s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 15s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
|   | hadoop.ozone.TestSecureOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-997 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956088/HDDS-997.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  |
| uname | Linux 13fdb55c979c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / e321b91 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2094/artifact/out/patch-unit-hadoop-ozone.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2094/testReport/ |
| Max. process+thread count | 1069 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2094/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750781#comment-16750781
 ] 

Surendra Singh Lilhore commented on HDFS-14215:
---

+1 for v4

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the default NS, and are
> thus dependent on the availability of the default NS.
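
For illustration only (a sketch of the general idea, not necessarily the approach the patch takes): calls whose answer is namespace-independent can try every subcluster instead of depending on the default NS being up.

{code:java}
import java.io.IOException;
import java.util.List;

// NamespaceClient is an illustrative stand-in and the String return type is
// a simplification of the real ServerDefaults response.
final class AnyNamespaceInvoker {
  interface NamespaceClient {
    String getServerDefaults() throws IOException;
  }

  static String getServerDefaults(List<NamespaceClient> subclusters)
      throws IOException {
    IOException lastFailure = null;
    for (NamespaceClient ns : subclusters) {
      try {
        return ns.getServerDefaults(); // first reachable namespace wins
      } catch (IOException e) {
        lastFailure = e; // keep trying the remaining subclusters
      }
    }
    throw lastFailure != null ? lastFailure
        : new IOException("no namespace available");
  }
}
{code}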






[jira] [Commented] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750777#comment-16750777
 ] 

Ranith Sardar commented on HDFS-14225:
--

Attached the patch. Please take a look.

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting an UnknownHostException in the unit tests.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}






[jira] [Updated] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14225:
-
Status: Patch Available  (was: Open)

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14225-HDFS-13891.000.patch
>
>
> Getting an UnknownHostException in the unit tests.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}






[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750769#comment-16750769
 ] 

Takanobu Asanuma commented on HDFS-14224:
-

[~ayushtkn] Sorry for the confusion, but please fix the comment as shown below.

{code:java}
// We return from the first response as we assume that the EC policy
// of each sub-cluster is same.
{code}

The rest of the patch looks good to me.
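
For clarity, the aggregation under review behaves roughly like this (an illustrative sketch, not the patch itself):

{code:java}
import java.util.Collection;

// Take the EC policy from the first response and tolerate destinations that
// report none, so no null policy is dereferenced.
final class EcPolicyAggregation {
  static String firstEcPolicy(Collection<String> policiesPerSubcluster) {
    for (String policy : policiesPerSubcluster) {
      if (policy != null) {
        // We return from the first response as we assume that the EC policy
        // of each sub-cluster is same.
        return policy;
      }
    }
    return null; // no destination has an EC policy set
  }
}
{code}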

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750766#comment-16750766
 ] 

Takanobu Asanuma commented on HDFS-14224:
-

bq. We should use the first directory's EC policy. In a multi-directory scenario, the EC policy of each directory should be the same.

That may be right. I'm now OK with using the first element.

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Assigned] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14225:


Assignee: Ranith Sardar

> RBF : MiniRouterDFSCluster should configure the failover proxy provider for 
> namespace
> -
>
> Key: HDFS-14225
> URL: https://issues.apache.org/jira/browse/HDFS-14225
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Minor
>
> Getting an UnknownHostException in the unit tests.
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
> java.net.UnknownHostException: ns0
> {noformat}






[jira] [Created] (HDFS-14225) RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace

2019-01-23 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14225:
-

 Summary: RBF : MiniRouterDFSCluster should configure the failover 
proxy provider for namespace
 Key: HDFS-14225
 URL: https://issues.apache.org/jira/browse/HDFS-14225
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: federation
Affects Versions: 3.1.1
Reporter: Surendra Singh Lilhore


Getting an UnknownHostException in the unit tests.

{noformat}
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
java.net.UnknownHostException: ns0
{noformat}
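
The fix direction is to give the mini cluster's client configuration the standard HA/federation keys so that the logical name {{ns0}} resolves through a failover proxy provider (a sketch; the addresses below are placeholders, not the actual MiniRouterDFSCluster values):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Standard client-side settings that make "ns0" a resolvable logical
// nameservice instead of a hostname. Addresses are placeholders.
public class Ns0ProxyProviderConf {
  public static Configuration withFailoverProvider(Configuration conf) {
    conf.set("dfs.nameservices", "ns0");
    conf.set("dfs.ha.namenodes.ns0", "nn0,nn1");
    conf.set("dfs.namenode.rpc-address.ns0.nn0", "127.0.0.1:8020");
    conf.set("dfs.namenode.rpc-address.ns0.nn1", "127.0.0.1:8021");
    conf.set("dfs.client.failover.proxy.provider.ns0",
        "org.apache.hadoop.hdfs.server.namenode.ha."
            + "ConfiguredFailoverProxyProvider");
    return conf;
  }
}
{code}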






[jira] [Commented] (HDDS-1000) Write a tool to dump DataNode RocksDB in human-readable format

2019-01-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750744#comment-16750744
 ] 

Arpit Agarwal commented on HDDS-1000:
-

You snooze, you lose. :)

> Write a tool to dump DataNode RocksDB in human-readable format
> --
>
> Key: HDDS-1000
> URL: https://issues.apache.org/jira/browse/HDDS-1000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> It would be good to have a command-line tool that can dump the contents of a
> DataNode RocksDB file in a human-readable format, e.g. YAML.
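
A minimal sketch of such a tool with the stock RocksDB Java API (hex output stands in for the container-specific decoding a real tool would do):

{code:java}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

// Iterate the whole DB read-only and print each entry as a YAML-ish list
// item. args[0] is the path to the DataNode container DB directory.
public class DumpContainerDB {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    try (RocksDB db = RocksDB.openReadOnly(args[0]);
         RocksIterator it = db.newIterator()) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        System.out.println("- key: " + hex(it.key()));
        System.out.println("  value: " + hex(it.value()));
      }
    }
  }

  private static String hex(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }
}
{code}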






[jira] [Commented] (HDDS-1000) Write a tool to dump DataNode RocksDB in human-readable format

2019-01-23 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750743#comment-16750743
 ] 

Anu Engineer commented on HDDS-1000:


I would have grabbed this if I had seen it earlier, what a cool Jira number :)

 

> Write a tool to dump DataNode RocksDB in human-readable format
> --
>
> Key: HDDS-1000
> URL: https://issues.apache.org/jira/browse/HDDS-1000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> It would be good to have a command-line tool that can dump the contents of a
> DataNode RocksDB file in a human-readable format, e.g. YAML.






[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-23 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-997:

Status: Patch Available  (was: Open)

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Assigned] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-23 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi reassigned HDDS-997:
---

Assignee: Nilotpal Nandi

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-23 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-997:

Attachment: HDDS-997.001.patch

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch
>
>







[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750723#comment-16750723
 ] 

Hadoop QA commented on HDFS-14224:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m  9s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 38s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m  6s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m  9s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14224 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956083/HDFS-14224-HDFS-13891-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3b9097dda0ee 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26034/testReport/ |
| Max. process+thread count | 1029 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26034/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> 

[jira] [Commented] (HDFS-14186) blockreport storm slow down namenode restart seriously in large cluster

2019-01-23 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750718#comment-16750718
 ] 

He Xiaoqiao commented on HDFS-14186:


Thanks [~daryn], [~kihwal] for your help. This issue is based on 2.7.1, which 
has no async block report processing. I am currently testing the HDFS-9198 
patch and will report the results as soon as testing finishes.

> blockreport storm slow down namenode restart seriously in large cluster
> ---
>
> Key: HDFS-14186
> URL: https://issues.apache.org/jira/browse/HDFS-14186
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14186.001.patch
>
>
> In the current implementation, a datanode sends its block report immediately 
> after it successfully registers with the namenode on restart, and the 
> resulting blockreport storm puts the namenode under heavy load. One result 
> is that some received RPCs have to be skipped because their queue time 
> exceeds the timeout. If a datanode's heartbeat RPCs are continually skipped 
> for long enough (the default heartbeatExpireInterval is 630s), it is marked 
> DEAD, and the datanode then has to re-register and send its block report 
> again. This aggravates the blockreport storm, traps the cluster in a vicious 
> circle, and seriously slows down namenode startup (by an hour or even more) 
> in a large (several thousands of datanodes) and busy cluster. Although there 
> has been much work to optimize namenode startup, the issue still exists.
> I propose to postpone the dead-datanode check until the namenode has 
> finished startup.
> Any comments and suggestions are welcome.
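
For context, the 630s default quoted above follows from the namenode's expiry formula, assuming the stock defaults dfs.namenode.heartbeat.recheck-interval = 300s and dfs.heartbeat.interval = 3s:

{noformat}
heartbeatExpireInterval = 2 * heartbeatRecheckInterval + 10 * heartbeatInterval
                        = 2 * 300s + 10 * 3s
                        = 630s
{noformat}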






[jira] [Commented] (HDFS-14210) RBF: ModifyACL should work over all the destinations

2019-01-23 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750711#comment-16750711
 ] 

Shubham Dewan commented on HDFS-14210:
--

Thanks [~elgoiri] for the quick review.

I tried to reuse the TestRouterQuota class, but ran into an issue with 
MountTableResolver: our test scenario uses multiple destinations, which 
requires MultipleDestinationMountTableResolver. Hence I added a new class for 
clarity.

> RBF: ModifyACL should work over all the destinations
> 
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891.002.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) The command works for only one destination.
> The ACL should be set on both of the destinations.
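
A minimal sketch of the expected behavior (illustrative names, not the actual router code):

{code:java}
import java.io.IOException;
import java.util.List;

// An ACL change on a multi-destination mount point must reach every
// destination, e.g. both /test1 and /test2 for /testacl.
final class SetAclOnAllDestinations {
  interface DestinationClient {
    void modifyAclEntries(String path, String aclSpec) throws IOException;
  }

  static void modifyAclEverywhere(List<DestinationClient> clients,
      List<String> paths, String aclSpec) throws IOException {
    for (int i = 0; i < clients.size(); i++) {
      clients.get(i).modifyAclEntries(paths.get(i), aclSpec);
    }
  }
}
{code}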






[jira] [Updated] (HDFS-14186) blockreport storm slow down namenode restart seriously in large cluster

2019-01-23 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14186:
---
Affects Version/s: 2.7.1

> blockreport storm slow down namenode restart seriously in large cluster
> ---
>
> Key: HDFS-14186
> URL: https://issues.apache.org/jira/browse/HDFS-14186
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14186.001.patch
>
>
> In the current implementation, a datanode sends its block report immediately 
> after it successfully registers with the namenode on restart, and the 
> resulting blockreport storm puts the namenode under heavy load. One result 
> is that some received RPCs have to be skipped because their queue time 
> exceeds the timeout. If a datanode's heartbeat RPCs are continually skipped 
> for long enough (the default heartbeatExpireInterval is 630s), it is marked 
> DEAD, and the datanode then has to re-register and send its block report 
> again. This aggravates the blockreport storm, traps the cluster in a vicious 
> circle, and seriously slows down namenode startup (by an hour or even more) 
> in a large (several thousands of datanodes) and busy cluster. Although there 
> has been much work to optimize namenode startup, the issue still exists.
> I propose to postpone the dead-datanode check until the namenode has 
> finished startup.
> Any comments and suggestions are welcome.






[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750706#comment-16750706
 ] 

Hadoop QA commented on HDFS-14215:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 28s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 15s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  8s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14215 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956080/HDFS-14215-HDFS-13891-04.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bf45a64e6285 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26033/testReport/ |
| Max. process+thread count | 963 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/26033/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> 

[jira] [Commented] (HDDS-980) Adding getOMCertificate in SCMSecurityProtocol

2019-01-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750693#comment-16750693
 ] 

Hudson commented on HDDS-980:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15816 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15816/])
HDDS-980. Adding getOMCertificate in SCMSecurityProtocol. Contributed by (xyao: rev e321b91cb5b22542d0fe80cd03ea3f59fac8e569)
* (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
* (edit) hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
* (edit) hadoop-hdds/common/src/main/proto/hdds.proto
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/utils/CertificateCodec.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateApprover.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java


> Adding getOMCertificate in SCMSecurityProtocol 
> ---
>
> Key: HDDS-980
> URL: https://issues.apache.org/jira/browse/HDDS-980
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-980-HDDS-4.01.patch, HDDS-980-HDDS-4.02.patch, 
> HDDS-980-HDDS-4.03.patch, HDDS-980-HDDS-4.04.patch, HDDS-980-HDDs-4.00.patch, 
> HDDS-980.05.patch
>
>
> Add a new API in SCMSecurityProtocol for OM to request an SCM-signed certificate.
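
A plausible shape of the new call, sketched from the description (the committed signature in SCMSecurityProtocol may differ; for instance, it likely takes a protobuf OM-details message rather than plain strings):

{code:java}
import java.io.IOException;

// Hypothetical interface shape, for illustration only.
interface ScmSecurityProtocolSketch {
  /** Returns the PEM-encoded certificate signed by SCM's CA. */
  String getOMCertificate(String omDetails, String pemEncodedCsr)
      throws IOException;
}
{code}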






[jira] [Updated] (HDDS-980) Adding getOMCertificate in SCMSecurityProtocol

2019-01-23 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-980:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk.

> Adding getOMCertificate in SCMSecurityProtocol 
> ---
>
> Key: HDDS-980
> URL: https://issues.apache.org/jira/browse/HDDS-980
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-980-HDDS-4.01.patch, HDDS-980-HDDS-4.02.patch, 
> HDDS-980-HDDS-4.03.patch, HDDS-980-HDDS-4.04.patch, HDDS-980-HDDs-4.00.patch, 
> HDDS-980.05.patch
>
>
> Add a new API in SCMSecurityProtocol for OM to request an SCM-signed certificate.






[jira] [Updated] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14224:

Attachment: HDFS-14224-HDFS-13891-03.patch

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Updated] (HDDS-980) Adding getOMCertificate in SCMSecurityProtocol

2019-01-23 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-980:

Summary: Adding getOMCertificate in SCMSecurityProtocol   (was: add new api 
for OM in SCMSecurityProtocol )

> Adding getOMCertificate in SCMSecurityProtocol 
> ---
>
> Key: HDDS-980
> URL: https://issues.apache.org/jira/browse/HDDS-980
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-980-HDDS-4.01.patch, HDDS-980-HDDS-4.02.patch, 
> HDDS-980-HDDS-4.03.patch, HDDS-980-HDDS-4.04.patch, HDDS-980-HDDs-4.00.patch, 
> HDDS-980.05.patch
>
>
> Add a new API in SCMSecurityProtocol for OM to request an SCM-signed certificate.






[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750671#comment-16750671
 ] 

Ayush Saxena commented on HDFS-14224:
-

Handled the review comments as part of v3.
Please review!

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch, HDFS-14224-HDFS-13891-03.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750665#comment-16750665
 ] 

Ayush Saxena commented on HDFS-14215:
-

Uploaded v4 with the said changes.
Please review!

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the default NS, and are
> thus dependent on the availability of the default NS.






[jira] [Updated] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14215:

Attachment: HDFS-14215-HDFS-13891-04.patch

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch, 
> HDFS-14215-HDFS-13891-04.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the Default NS and are thus
> dependent on the availability of the Default NS.






[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750652#comment-16750652
 ] 

Hadoop QA commented on HDFS-14215:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14215 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956067/HDFS-14215-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a03ead519a62 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26032/testReport/ |
| Max. process+thread count | 1373 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26032/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -

[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750640#comment-16750640
 ] 

Íñigo Goiri commented on HDFS-14224:


Can we also add this unit test to {{TestRouterRpc}}?
For {{testGetContentSummaryEc()}}, can we extract the path and the EC policy
(for the latter, call it {{expectedECPolicy}})?
Can we also leave the filesystem as is, and just make it a DFS for this test
({{routerDFS}})?
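
A rough sketch of the suggested extraction (the path, the chosen policy, and the {{getRouterFileSystem()}} helper are illustrative assumptions, not from the patch; imports omitted):

{code}
@Test
public void testGetContentSummaryEc() throws Exception {
  final String path = "/testec";                     // extracted path
  final ErasureCodingPolicy expectedECPolicy =       // extracted policy, named as suggested
      SystemErasureCodingPolicies.getByID(
          SystemErasureCodingPolicies.RS_6_3_POLICY_ID);
  final DistributedFileSystem routerDFS =
      (DistributedFileSystem) getRouterFileSystem(); // assumed helper
  routerDFS.setErasureCodingPolicy(new Path(path), expectedECPolicy.getName());
  assertEquals(expectedECPolicy.getName(),
      routerDFS.getErasureCodingPolicy(new Path(path)).getName());
}
{code}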

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Commented] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750637#comment-16750637
 ] 

Íñigo Goiri commented on HDFS-14215:


Thanks for [^HDFS-14215-HDFS-13891-03.patch]. Minor nits (sketched below):
* Check {{isEmpty()}} instead of {{size() == 0}}.
* Extract the first element of the iterator for readability.
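
A minimal sketch of the two nits combined (the resolver field and the RBF type names are assumed for illustration, not from the patch):

{code}
Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();  // assumed call
if (nss.isEmpty()) {  // instead of nss.size() == 0
  throw new IOException("No namespace available.");
}
// First element extracted into a named variable for readability:
FederationNamespaceInfo ns = nss.iterator().next();
{code}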

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the Default NS and are thus
> dependent on the availability of the Default NS.






[jira] [Commented] (HDFS-14210) RBF: ModifyACL should work over all the destinations

2019-01-23 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750631#comment-16750631
 ] 

Íñigo Goiri commented on HDFS-14210:


Thanks [~shubham.dewan] for the patch.
Do we need a new test with a new MiniDFSCluster?
Can we reuse TestRouterRPC or one of those?

> RBF: ModifyACL should work over all the destinations
> 
>
> Key: HDFS-14210
> URL: https://issues.apache.org/jira/browse/HDFS-14210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14210-HDFS-13891.002.patch, HDFS-14210.001.patch
>
>
> 1) A mount point with multiple destinations.
> 2) ./bin/hdfs dfs -setfacl -m user:abc:rwx /testacl
> 3) where /testacl => /test1, /test2
> 4) The command works for only one destination.
> The ACL should be set on both destinations.






[jira] [Commented] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750624#comment-16750624
 ] 

Hadoop QA commented on HDFS-14224:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
16s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956061/HDFS-14224-HDFS-13891-02.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 13b43924c13d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 7fe0b06 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26031/testReport/ |
| Max. process+thread count | 1358 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26031/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> -

[jira] [Resolved] (HDDS-1001) Switch Hadoop version to 3.2.0

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1001.
-
Resolution: Duplicate

Whoops, [~sdeka] just pointed out that this duplicates HDDS-993.

> Switch Hadoop version to 3.2.0
> --
>
> Key: HDDS-1001
> URL: https://issues.apache.org/jira/browse/HDDS-1001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> Now that Hadoop 3.2.0 is released, we should be able to switch the 
> hadoop.version from 3.2.1-SNAPSHOT to 3.2.0.
> We should run all unit/acceptance tests manually once after making the 
> change. We are not sure Jenkins will do that.






[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750620#comment-16750620
 ] 

Yiqun Lin commented on HDDS-989:


Thanks for addressing the comments, [~arpitagarwal]! Looks great now; only two 
nits:
 * The log instance is obtained incorrectly in {{TestHddsVolumeChecker}} (see 
the sketch below).
 * It would be better to clean up the dirs created in 
{{TestVolumeSetDiskChecks#testOzoneDirsAreCreated}}.
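
On the first nit, the usual slf4j pattern would be (a sketch of the standard idiom, not the actual test file):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestHddsVolumeChecker {
  // Bind the logger to this class, rather than to a class it was copied from.
  public static final Logger LOG =
      LoggerFactory.getLogger(TestHddsVolumeChecker.class);
}
{code}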

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch
>
>
> HDDS volumes should be checked for errors periodically.






[jira] [Assigned] (HDDS-1001) Switch Hadoop version to 3.2.0

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1001:
---

Assignee: Supratim Deka

> Switch Hadoop version to 3.2.0
> --
>
> Key: HDDS-1001
> URL: https://issues.apache.org/jira/browse/HDDS-1001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> Now that Hadoop 3.2.0 is released, we should be able to switch the 
> hadoop.version from 3.2.1-SNAPSHOT to 3.2.0.
> We should run all unit/acceptance tests manually once after making the 
> change. We are not sure Jenkins will do that.






[jira] [Created] (HDDS-1001) Switch Hadoop version to 3.2.0

2019-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1001:
---

 Summary: Switch Hadoop version to 3.2.0
 Key: HDDS-1001
 URL: https://issues.apache.org/jira/browse/HDDS-1001
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


Now that Hadoop 3.2.0 is released, we should be able to switch the 
hadoop.version from 3.2.1-SNAPSHOT to 3.2.0.

We should run all unit/acceptance tests manually once after making the change. 
We are not sure Jenkins will do that.






[jira] [Updated] (HDFS-14215) RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability of Default NS

2019-01-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14215:

Attachment: HDFS-14215-HDFS-13891-03.patch

> RBF: GetServerDefaults and GetStoragePolicies are dependent on Availability 
> of Default NS
> -
>
> Key: HDFS-14215
> URL: https://issues.apache.org/jira/browse/HDFS-14215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14215-HDFS-13891-01.patch, 
> HDFS-14215-HDFS-13891-02.patch, HDFS-14215-HDFS-13891-03.patch
>
>
> GetServerDefaults and GetStoragePolicies fetch from the Default NS and are thus
> dependent on the availability of the Default NS.






[jira] [Updated] (HDFS-14224) RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest

2019-01-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14224:

Attachment: HDFS-14224-HDFS-13891-02.patch

> RBF: NPE in GetContentSummary For GetEcPolicy in Case of Multi Dest
> ---
>
> Key: HDFS-14224
> URL: https://issues.apache.org/jira/browse/HDFS-14224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14224-HDFS-13891-01.patch, 
> HDFS-14224-HDFS-13891-02.patch
>
>
> Null Pointer Exception in GetContentSummary for EC policy when there are 
> multiple destinations.






[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-01-23 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750577#comment-16750577
 ] 

CR Hota commented on HDFS-13972:


[~brahmareddy] Thanks for the follow-up. I am getting some of our 2019 planning 
done, along with a deployment of secure RBF in a datacenter for testing.
I will surely focus back on all these tickets in a week.

[~elgoiri] FYI ..

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.






[jira] [Commented] (HDFS-14185) Cleanup method calls to static Assert methods in TestAddStripedBlocks

2019-01-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750574#comment-16750574
 ] 

Hudson commented on HDFS-14185:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15815 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15815/])
HDFS-14185. Cleanup method calls to static Assert methods in (templedf: rev 
f3e642d92bcc4ca4a6f88ad0ca04eeeda2f2f529)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java


> Cleanup method calls to static Assert methods in TestAddStripedBlocks
> -
>
> Key: HDFS-14185
> URL: https://issues.apache.org/jira/browse/HDFS-14185
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14185.001.patch, HDFS-14185.002.patch, 
> HDFS-14185.003.patch, HDFS-14185.004.patch
>
>
> Clean up the method calls to static Assert methods in TestAddStripedBlocks.
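
For context, this kind of cleanup replaces qualified {{Assert}} calls with static imports, e.g. (an illustrative test, not the actual diff):

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ExampleTest {
  @Test
  public void testSum() {
    // Statically imported, instead of Assert.assertEquals(4, 2 + 2):
    assertEquals(4, 2 + 2);
  }
}
{code}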






[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2019-01-23 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750568#comment-16750568
 ] 

Wei-Chiu Chuang commented on HDFS-14081:


Hi [~shwetayakkali], it looks like {{TestDFSAdminWithHA#testMetaSave}} failed 
after the standby NN threw a StandbyException. DFSAdmin errors out if any of 
the NameNodes throws an IOException (StandbyException included). We should 
update DFSAdmin to tolerate the StandbyException, as sketched below. But even 
with that additional change, an old client will not work against a newer 
NameNode.
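
A minimal sketch of the suggested tolerance (the surrounding structure is assumed; this is not the actual DFSAdmin code):

{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.ipc.StandbyException;

class MetaSaveSketch {
  /** Run metaSave against every NameNode, tolerating standbys. */
  static void metaSaveAll(ClientProtocol[] proxies, String filename)
      throws IOException {
    for (ClientProtocol proxy : proxies) {
      try {
        proxy.metaSave(filename);
      } catch (StandbyException e) {
        // Skip the standby NameNode instead of failing the whole command.
        System.err.println("Skipped standby NameNode: " + e.getMessage());
      }
    }
  }
}
{code}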

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch, 
> HDFS-14081.003.patch, HDFS-14081.004.patch, HDFS-14081.005.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present.
> This happens in HA: metasave on the first NN succeeded but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler
> 24 on 8020, call Call#1 Retry#0
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 172.26.9.163:60234
> java.lang.NullPointerException
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}






[jira] [Commented] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750566#comment-16750566
 ] 

Hudson commented on HDDS-975:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15814 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15814/])
HDDS-975. Manage ozone security tokens with ozone shell cli. Contributed (ajay: 
rev dcbc8b86ed238d7eb28aba12f9b56becdd6e7c96)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/GetTokenHandler.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/PrintTokenHandler.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/TokenCommands.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/RenewTokenHandler.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/token/CancelTokenHandler.java


> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, Screen Shot 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Updated] (HDFS-14185) Cleanup method calls to static Assert methods in TestAddStripedBlocks

2019-01-23 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14185:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks, [~shwetayakkali].  Committed to trunk.

> Cleanup method calls to static Assert methods in TestAddStripedBlocks
> -
>
> Key: HDFS-14185
> URL: https://issues.apache.org/jira/browse/HDFS-14185
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14185.001.patch, HDFS-14185.002.patch, 
> HDFS-14185.003.patch, HDFS-14185.004.patch
>
>
> Clean up the method calls to static Assert methods in TestAddStripedBlocks.






[jira] [Commented] (HDDS-453) om and scm should use piccoli to parse arguments

2019-01-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750525#comment-16750525
 ] 

Arpit Agarwal commented on HDDS-453:


Awesome, thanks for the tip [~rem...@yahoo.com].

> om and scm should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].






[jira] [Commented] (HDDS-453) om and scm should use piccoli to parse arguments

2019-01-23 Thread Remko Popma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750522#comment-16750522
 ] 

Remko Popma commented on HDDS-453:
--

Picocli allows you to have both if desired:
You can declare an option as 

{code}
@Option(names = {"-init", "--init"}, description = "...")
String init;
{code}

It is even possible to make the {{-init}} option a _hidden_ option, so it does 
not appear in the usage help message but the parser still recognizes it; this 
smooths the transition and avoids backwards-compatibility issues.
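
A self-contained sketch of that pattern (the command and field names are made up for illustration):

{code}
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Illustrative command; not from the Ozone codebase.
@Command(name = "scm", mixinStandardHelpOptions = true)
class ScmCliSketch implements Runnable {
  // Documented, preferred form of the option.
  @Option(names = "--init", description = "Initialize the SCM.")
  boolean init;

  // Hidden legacy alias: still parsed, but omitted from the usage help.
  @Option(names = "-init", hidden = true)
  boolean initLegacy;

  public void run() {
    if (init || initLegacy) {
      System.out.println("Initializing...");
    }
  }

  public static void main(String[] args) {
    CommandLine.run(new ScmCliSketch(), args);
  }
}
{code}

With this, {{scm --help}} lists only {{--init}}, while {{scm -init}} continues to work.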

> om and scm should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].






[jira] [Updated] (HDDS-761) Create S3 subcommand to run S3 related operations

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-761:

Target Version/s: 0.4.0

> Create S3 subcommand to run S3 related operations
> -
>
> Key: HDDS-761
> URL: https://issues.apache.org/jira/browse/HDDS-761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is added to create an S3 subcommand, which will be used for all 
> S3-related operations.
> Under this Jira, move the command ozone sh bucket <> to ozone s3 
> bucket <>.






[jira] [Commented] (HDDS-980) add new api for OM in SCMSecurityProtocol

2019-01-23 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750501#comment-16750501
 ] 

Xiaoyu Yao commented on HDDS-980:
-

Thanks [~ajayydv] for the update. +1 for the v5 patch.

> add new api for OM in SCMSecurityProtocol 
> --
>
> Key: HDDS-980
> URL: https://issues.apache.org/jira/browse/HDDS-980
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-980-HDDS-4.01.patch, HDDS-980-HDDS-4.02.patch, 
> HDDS-980-HDDS-4.03.patch, HDDS-980-HDDS-4.04.patch, HDDS-980-HDDs-4.00.patch, 
> HDDS-980.05.patch
>
>
> Add a new API in SCMSecurityProtocol for the OM to request an SCM-signed certificate.






[jira] [Commented] (HDDS-761) Create S3 subcommand to run S3 related operations

2019-01-23 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750500#comment-16750500
 ] 

Bharat Viswanadham commented on HDDS-761:
-

In this Jira, I will take care of moving bucket <> under s3, as 
this was one of the comments by [~elek] on HDDS-683, the Jira in which this 
feature was added.

> Create S3 subcommand to run S3 related operations
> -
>
> Key: HDDS-761
> URL: https://issues.apache.org/jira/browse/HDDS-761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is added to create an S3 subcommand, which will be used for all 
> S3-related operations.
> Under this Jira, move the command ozone sh bucket <> to ozone s3 
> bucket <>.






[jira] [Updated] (HDFS-14061) Check if the cluster topology supports the EC policy before setting, enabling or adding it

2019-01-23 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14061:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~knanasi] for contributing the patch!

> Check if the cluster topology supports the EC policy before setting, enabling 
> or adding it
> --
>
> Key: HDFS-14061
> URL: https://issues.apache.org/jira/browse/HDFS-14061
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14061.001.patch, HDFS-14061.002.patch, 
> HDFS-14061.003.patch, HDFS-14061.004.patch, HDFS-14061.005.patch
>
>
> HDFS-12946 introduces a command for verifying whether there are enough racks 
> and datanodes for the enabled erasure coding policies.
> This verification could be executed for an erasure coding policy before 
> enabling, setting, or adding it; a warning message could be written if the 
> verification fails, or the operation could even be made to fail in that case.






[jira] [Commented] (HDFS-14061) Check if the cluster topology supports the EC policy before setting, enabling or adding it

2019-01-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750496#comment-16750496
 ] 

Hudson commented on HDFS-14061:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15813 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15813/])
HDFS-14061. Check if the cluster topology supports the EC policy before 
(weichiu: rev 951cdd7e4cbe68284620f6805f85c51301150c58)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/ECTopologyVerifier.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestECAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Check if the cluster topology supports the EC policy before setting, enabling 
> or adding it
> --
>
> Key: HDFS-14061
> URL: https://issues.apache.org/jira/browse/HDFS-14061
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14061.001.patch, HDFS-14061.002.patch, 
> HDFS-14061.003.patch, HDFS-14061.004.patch, HDFS-14061.005.patch
>
>
> HDFS-12946 introduces a command for verifying whether there are enough racks 
> and datanodes for the enabled erasure coding policies.
> This verification could be executed for an erasure coding policy before 
> enabling, setting, or adding it; a warning message could be written if the 
> verification fails, or the operation could even be made to fail in that case.






[jira] [Commented] (HDFS-14061) Check if the cluster topology supports the EC policy before setting, enabling or adding it

2019-01-23 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750485#comment-16750485
 ] 

Wei-Chiu Chuang commented on HDFS-14061:


+1 on the 005 patch. Will commit shortly.

> Check if the cluster topology supports the EC policy before setting, enabling 
> or adding it
> --
>
> Key: HDFS-14061
> URL: https://issues.apache.org/jira/browse/HDFS-14061
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-14061.001.patch, HDFS-14061.002.patch, 
> HDFS-14061.003.patch, HDFS-14061.004.patch, HDFS-14061.005.patch
>
>
> HDFS-12946 introduces a command for verifying whether there are enough racks 
> and datanodes for the enabled erasure coding policies.
> This verification could be executed for an erasure coding policy before 
> enabling, setting, or adding it; a warning message could be written if the 
> verification fails, or the operation could even be made to fail in that case.






[jira] [Commented] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750480#comment-16750480
 ] 

Hadoop QA commented on HDDS-991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-ozone: The patch generated 0 new + 5 
unchanged - 7 fixed = 5 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 17s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
22s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956037/HDDS-991.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 7a8211254798 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 0b91329 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2091/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2091/testReport/ |
| Max. process+thread count |  (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager U: hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2091/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch
>
>
> Fix failures in TestSecureOzoneCluster




[jira] [Updated] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-975:

Attachment: (was: HDDS-975.04.patch)

> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, Screen Shot 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Commented] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750484#comment-16750484
 ] 

Ajay Kumar commented on HDDS-975:
-

Deleted v4, as v3 applies to trunk; I will commit it shortly.

> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, Screen Shot 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Commented] (HDDS-987) MultipartUpload: S3API for list parts of a object

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750478#comment-16750478
 ] 

Hadoop QA commented on HDDS-987:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDDS-987 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-987 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956044/HDDS-987.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2093/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3API for list parts of a object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html






[jira] [Updated] (HDDS-987) MultipartUpload: S3API for list parts of a object

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-987:

Attachment: HDDS-987.01.patch

> MultipartUpload: S3API for list parts of a object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html






[jira] [Updated] (HDDS-987) MultipartUpload: S3API for list parts of a object

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-987:

Target Version/s: 0.4.0

> MultipartUpload: S3API for list parts of a object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html






[jira] [Commented] (HDDS-987) MultipartUpload: S3API for list parts of a object

2019-01-23 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750475#comment-16750475
 ] 

Bharat Viswanadham commented on HDDS-987:
-

This patch needs to be applied on top of HDDS-948 and HDDS-956.

> MultipartUpload: S3API for list parts of a object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html






[jira] [Updated] (HDDS-987) MultipartUpload: S3API for list parts of a object

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-987:

Status: Patch Available  (was: In Progress)

> MultipartUpload: S3API for list parts of a object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html






[jira] [Commented] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750472#comment-16750472
 ] 

Hadoop QA commented on HDDS-975:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-975 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956041/HDDS-975.04.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2092/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, HDDS-975.04.patch, Screen Shot 
> 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Updated] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-975:

Attachment: HDDS-975.04.patch

> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, HDDS-975.04.patch, Screen Shot 
> 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Commented] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750469#comment-16750469
 ] 

Ajay Kumar commented on HDDS-975:
-

Patch v4 rebases on trunk, as v3 no longer applies.

> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, HDDS-975.04.patch, Screen Shot 
> 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Commented] (HDDS-975) Manage ozone security tokens with ozone shell cli

2019-01-23 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750446#comment-16750446
 ] 

Xiaoyu Yao commented on HDDS-975:
-

Thanks [~ajayydv] for the update. +1 for the v3 trunk patch.

> Manage ozone security tokens with ozone shell cli
> -
>
> Key: HDDS-975
> URL: https://issues.apache.org/jira/browse/HDDS-975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-975-HDDS-4.00.patch, HDDS-975-HDDS-4.01.patch, 
> HDDS-975.02.patch, HDDS-975.03.patch, Screen Shot 2019-01-10 at 7.27.21 PM.png
>
>
> Create ozone cli commands for ozone shell






[jira] [Updated] (HDDS-991) Fix failures in TestSecureOzoneCluster

2019-01-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-991:

Attachment: HDDS-991.01.patch

> Fix failures in TestSecureOzoneCluster
> --
>
> Key: HDDS-991
> URL: https://issues.apache.org/jira/browse/HDDS-991
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-991.00.patch, HDDS-991.01.patch
>
>
> Fix failures in TestSecureOzoneCluster






[jira] [Created] (HDDS-1000) Write a tool to dump DataNode RocksDB in human-readable format

2019-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-1000:
---

 Summary: Write a tool to dump DataNode RocksDB in human-readable 
format
 Key: HDDS-1000
 URL: https://issues.apache.org/jira/browse/HDDS-1000
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Arpit Agarwal


It would be good to have a command-line tool that can dump the contents of a 
DataNode RocksDB file in a human-readable format, e.g. YAML.
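
A rough sketch of such a tool using the RocksDB Java API (the DB path is a
placeholder; a real tool would decode Ozone's container schema into typed YAML
rather than printing raw bytes):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class DumpContainerDB {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // Placeholder path; a real tool would take the container DB location.
    String dbPath = args.length > 0 ? args[0] : "/tmp/container.db";
    try (Options options = new Options();
         RocksDB db = RocksDB.openReadOnly(options, dbPath);
         RocksIterator it = db.newIterator()) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        // A real tool would deserialize these bytes into typed YAML fields.
        System.out.printf("- key: %s%n  valueBytes: %d%n",
            new String(it.key()), it.value().length);
      }
    }
  }
}
{code}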







[jira] [Assigned] (HDDS-1000) Write a tool to dump DataNode RocksDB in human-readable format

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1000:
---

Assignee: Supratim Deka

> Write a tool to dump DataNode RocksDB in human-readable format
> --
>
> Key: HDDS-1000
> URL: https://issues.apache.org/jira/browse/HDDS-1000
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>
> It would be good to have a command-line tool that can dump the contents of a 
> DataNode RocksDB file in a human-readable format, e.g. YAML.






[jira] [Commented] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750390#comment-16750390
 ] 

Hudson commented on HDDS-764:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15812 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15812/])
HDDS-764. Run S3 smoke tests with replication STANDARD. (#462) (bharat: rev 
0b91329ed67dbe5545f9f63576de0dd7a0fbe5f4)
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectdelete.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectputget.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/test.sh
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectmultidelete.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot


> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is created from a comment from [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' option. (But we can do that in a separate Jira.)






[jira] [Updated] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-764:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you [~elek] for the contribution.
I have committed this to trunk.

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is created from a comment from [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' option. (But we can do that in a separate Jira.)






[jira] [Updated] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-764:

Fix Version/s: 0.4.0

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is created from a comment from [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' option. (But we can do that in a separate Jira.)






[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=189155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-189155
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 23/Jan/19 19:37
Start Date: 23/Jan/19 19:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 189155)
Time Spent: 2h  (was: 1h 50m)

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is created from a comment from [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' option. (But we can do that in a separate Jira.)






[jira] [Work logged] (HDDS-764) Run S3 smoke tests with replication STANDARD.

2019-01-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-764?focusedWorklogId=189151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-189151
 ]

ASF GitHub Bot logged work on HDDS-764:
---

Author: ASF GitHub Bot
Created on: 23/Jan/19 19:30
Start Date: 23/Jan/19 19:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #462: 
HDDS-764. Run S3 smoke tests with replication STANDARD.
URL: https://github.com/apache/hadoop/pull/462#discussion_r250340628
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -24,6 +23,41 @@ mkdir -p "$DIR/$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$DIR/$RESULT_DIR"
 
+## @description wait until 3 datanodes are up (or 30 seconds)
+## @param the docker-compose file
+wait_for_datanodes(){
+
+  #Reset the timer
+  SECONDS=0
+
+  #Don't give it up until 30 seconds
 
 Review comment:
    Thank you @elek for the clarification.
    Overall it looks good to me.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 189151)
Time Spent: 1h 50m  (was: 1h 40m)

> Run S3 smoke tests with replication STANDARD.
> -
>
> Key: HDDS-764
> URL: https://issues.apache.org/jira/browse/HDDS-764
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: HDDS-764.001.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is created from a comment from [~elek]:
> 1. I think sooner or later we need to run ozone tests with real replication. 
> We can add a 'scale up' step to hadoop-ozone/dist/src/main/smoketest/test.sh:
> {code:java}
> docker-compose -f "$COMPOSE_FILE" down
> docker-compose -f "$COMPOSE_FILE" up -d
> docker-compose -f "$COMPOSE_FILE" scale datanode=3
> {code}
> And with this modification we don't need the '--storage-class 
> REDUCED_REDUNDANCY' option. (But we can do that in a separate Jira.)






[jira] [Assigned] (HDDS-649) Parallel test execution is broken

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-649:
--

Assignee: (was: Supratim Deka)

> Parallel test execution is broken
> -
>
> Key: HDDS-649
> URL: https://issues.apache.org/jira/browse/HDDS-649
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-649.01.patch
>
>
> Parallel tests (with mvn test -Pparallel-tests) give unpredictable results, 
> likely because surefire is parallelizing test cases within a class.
> It looks like surefire has options to parallelize at the class level.






[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-23 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750322#comment-16750322
 ] 

Chao Sun commented on HDFS-14118:
-

I think overall it makes sense to have the resolver pluggable so other 
implementations can be added in the future.

A couple of comments on the v1 patch:
1. Is it possible to implement the resolving logic inside 
{{getProxyAddresses}}? This could then be reused by other classes that inherit 
{{AbstractNNFailoverProxyProvider}}, such as {{ObserverReadProxyProvider}}.
2. Can the config be a class name, just like 
{{dfs.client.failover.proxy.provider}}? Otherwise, in order to add a new 
implementation, people have to change HDFS code as well (as opposed to just 
loading the provided class). See the sketch below.
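
To illustrate the second point, a rough sketch of a class-name-based resolver
config; the {{DomainNameResolver}} interface, the {{DNSDomainNameResolver}}
default, and the config key {{dfs.client.failover.resolver.impl}} are
hypothetical names for illustration, not the actual patch:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical pluggable resolver interface.
interface DomainNameResolver {
  InetAddress[] getAllByDomainName(String domain) throws UnknownHostException;
}

// Hypothetical default implementation backed by the JDK resolver.
class DNSDomainNameResolver implements DomainNameResolver {
  @Override
  public InetAddress[] getAllByDomainName(String domain)
      throws UnknownHostException {
    // Returns every address the domain resolves to, e.g. one per router.
    return InetAddress.getAllByName(domain);
  }
}

class ResolverLoader {
  // Hypothetical config key, analogous to dfs.client.failover.proxy.provider.
  static final String RESOLVER_IMPL_KEY = "dfs.client.failover.resolver.impl";

  static DomainNameResolver create(Configuration conf) {
    Class<? extends DomainNameResolver> clazz = conf.getClass(
        RESOLVER_IMPL_KEY, DNSDomainNameResolver.class,
        DomainNameResolver.class);
    // Instantiated via reflection, so adding a new implementation needs no
    // change to HDFS code, only a config value.
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}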

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14118.001.patch, HDFS-14118.patch
>
>
> Clients need to know about routers to talk to the HDFS cluster (obviously), 
> and updating the set of routers (adding/removing) would otherwise require a 
> change on every client, which is a painful process.
> DNS can be used here to resolve the single domain name clients know to the 
> list of routers in the current config. However, DNS won't be able to resolve 
> only to the working routers based on certain health thresholds.
> There are a few ways this can be solved. One way is to have a separate 
> script regularly check the status of each router and update the DNS records 
> if a router fails the health thresholds; security would have to be carefully 
> considered for this approach. Another way is to have the client do the 
> normal connection/failover handling after it gets the list of routers, which 
> requires changing the current failover proxy provider.






[jira] [Work started] (HDDS-907) Use WAITFOR environment variable to handle dependencies between ozone containers

2019-01-23 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-907 started by Doroszlai, Attila.
--
> Use WAITFOR environment variable to handle dependencies between ozone 
> containers
> 
>
> Key: HDDS-907
> URL: https://issues.apache.org/jira/browse/HDDS-907
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: newbie
>
> Until HDDS-839 we had a hard-coded 15-second sleep before we started the 
> OzoneManager with the docker-compose files 
> (hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose).
> For initialization of the OzoneManager we need the SCM. The OM will retry 
> the connection if the SCM is not available, but the DNS resolution is 
> cached: if the DNS of the SCM is not available at OM startup, the OM can't 
> be initialized.
> Before HDDS-839 we handled this dependency with the 15-second sleep, which 
> was usually slower than what we need.
> Now we can use the WAITFOR environment variables from HDDS-839 (like 
> WAITFOR=scm:9876) to handle this dependency; they can be added to all the 
> docker-compose files.
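
As a rough illustration, the kind of check a WAITFOR entry drives is a TCP
probe against the named host:port until it succeeds or a deadline passes. A
minimal sketch, assuming a host:port value format and a five-minute budget
(the actual wait logic lives in the base-image scripts from HDDS-839):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitFor {
  public static void main(String[] args) throws InterruptedException {
    // e.g. WAITFOR=scm:9876 in the docker-compose environment.
    String[] hostPort =
        System.getenv().getOrDefault("WAITFOR", "scm:9876").split(":");
    long deadline = System.currentTimeMillis() + 300_000; // 5 minute budget
    while (System.currentTimeMillis() < deadline) {
      try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(hostPort[0],
            Integer.parseInt(hostPort[1])), 1_000);
        return; // dependency is up; safe to start the real service
      } catch (IOException e) {
        Thread.sleep(1_000); // not up yet; probe again
      }
    }
    System.exit(1); // dependency never came up
  }
}
{code}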






[jira] [Assigned] (HDDS-907) Use WAITFOR environment variable to handle dependencies between ozone containers

2019-01-23 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-907:
--

Assignee: Doroszlai, Attila

> Use WAITFOR environment variable to handle dependencies between ozone 
> containers
> 
>
> Key: HDDS-907
> URL: https://issues.apache.org/jira/browse/HDDS-907
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: newbie
>
> Until HDDS-839 we had a hard-coded 15-second sleep before we started the 
> OzoneManager with the docker-compose files 
> (hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose).
> For initialization of the OzoneManager we need the SCM. The OM will retry 
> the connection if the SCM is not available, but the DNS resolution is 
> cached: if the DNS of the SCM is not available at OM startup, the OM can't 
> be initialized.
> Before HDDS-839 we handled this dependency with the 15-second sleep, which 
> was usually slower than what we need.
> Now we can use the WAITFOR environment variables from HDDS-839 (like 
> WAITFOR=scm:9876) to handle this dependency; they can be added to all the 
> docker-compose files.






[jira] [Commented] (HDFS-14186) blockreport storm slow down namenode restart seriously in large cluster

2019-01-23 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750247#comment-16750247
 ] 

Daryn Sharp commented on HDFS-14186:


Kihwal asked me to take a look. Safe mode logic has historically been very 
brittle, so I worry about the unexpected consequences of changing the logic. I 
think the change is very simple.

BR processing is serialized via queuing for the 
{{BlockManager.BlockReportProcessingThread}}. The main issue is probably that 
the {{enqueue}} method will block when the queue is full. If the queue offer 
fails, it should throw a {{RetriableException}}. This will prevent stalling 
the RPC handlers, which otherwise leads to a vicious cycle of timeouts and 
retries.
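
A minimal sketch of the suggested offer-or-throw behavior (the queue type,
capacity, and class shape are simplified illustrations; the real queue lives
inside {{BlockManager.BlockReportProcessingThread}}):

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import org.apache.hadoop.ipc.RetriableException;

class BlockReportQueue {
  private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);

  void enqueue(Runnable action) throws RetriableException {
    // Non-blocking offer: never park an RPC handler thread on a full queue.
    if (!queue.offer(action)) {
      // The client backs off and retries instead of holding a handler slot.
      throw new RetriableException("Block report queue is full; retry later");
    }
  }
}
{code}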

> blockreport storm slow down namenode restart seriously in large cluster
> ---
>
> Key: HDFS-14186
> URL: https://issues.apache.org/jira/browse/HDFS-14186
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14186.001.patch
>
>
> In the current implementation, the datanode sends a block report immediately 
> after registering with the namenode successfully on restart, and the block 
> report storm puts the namenode under high load while processing them. One 
> result is that some received RPCs have to be skipped because their queue 
> time exceeds the timeout. If a datanode's heartbeat RPCs are continually 
> skipped for long enough (the default heartbeatExpireInterval is 630s), it 
> will be marked DEAD; the datanode then has to re-register and send its block 
> report again, which aggravates the block report storm and traps the cluster 
> in a vicious circle, seriously slowing down namenode startup (by an hour or 
> more), especially in a large (several thousands of datanodes) and busy 
> cluster. Although there has been much work to optimize namenode startup, the 
> issue still exists.
> I propose to postpone the dead-datanode check until the namenode has 
> finished startup.
> Any comments and suggestions are welcome.






[jira] [Created] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-01-23 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-999:
-

 Summary: Make the DNS resolution in OzoneManager more resilient
 Key: HDDS-999
 URL: https://issues.apache.org/jira/browse/HDDS-999
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Elek, Marton


If the OzoneManager is started before the SCM, the SCM DNS may not be 
available. In this case the OM should retry and re-resolve the DNS, but as of 
now it throws an exception:
{code:java}
2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
java.net.SocketException: Call From om-0.om to null:0 failed on socket 
exception: java.net.SocketException: Unresolved address; For more details see:  
http://wiki.apache.org/hadoop/SocketException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
    at org.apache.hadoop.ipc.Server.bind(Server.java:566)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
    at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
    at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
    at 
org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
    at 
org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
    at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:265)
    at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
    at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
Caused by: java.net.SocketException: Unresolved address
    at sun.nio.ch.Net.translateToSocketException(Net.java:131)
    at sun.nio.ch.Net.translateException(Net.java:157)
    at sun.nio.ch.Net.translateException(Net.java:163)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
    at org.apache.hadoop.ipc.Server.bind(Server.java:549)
    ... 11 more
Caused by: java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:101)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    ... 12 more{code}
It should be fixed. (See also HDDS-421, which fixed the same problem on the 
datanode side, and HDDS-907, which is the workaround while this issue is not 
resolved.)
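
A minimal sketch of a retry-and-re-resolve loop the OM could run before
binding its RPC server (the retry budget and backoff are illustrative
assumptions):

{code:java}
import java.net.InetSocketAddress;

public class ResolveWithRetry {
  // Re-creates the InetSocketAddress so every attempt triggers a fresh
  // DNS lookup instead of reusing a cached negative result.
  static InetSocketAddress resolve(String host, int port, int maxAttempts)
      throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      InetSocketAddress addr = new InetSocketAddress(host, port);
      if (!addr.isUnresolved()) {
        return addr; // DNS answered; safe to bind or connect
      }
      Thread.sleep(1_000L * attempt); // back off before re-resolving
    }
    throw new IllegalStateException("Could not resolve " + host + ":" + port);
  }
}
{code}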






[jira] [Assigned] (HDDS-649) Parallel test execution is broken

2019-01-23 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-649:
--

Assignee: Supratim Deka  (was: Dinesh Chitlangia)

> Parallel test execution is broken
> -
>
> Key: HDDS-649
> URL: https://issues.apache.org/jira/browse/HDDS-649
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Supratim Deka
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-649.01.patch
>
>
> Parallel tests (with mvn test -Pparallel-tests) give unpredictable results, 
> likely because surefire is parallelizing test cases within a class.
> It looks like surefire has options to parallelize at the class level.






[jira] [Assigned] (HDDS-907) Use WAITFOR environment variable to handle dependencies between ozone containers

2019-01-23 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-907:
-

Assignee: (was: Elek, Marton)

> Use WAITFOR environment variable to handle dependencies between ozone 
> containers
> 
>
> Key: HDDS-907
> URL: https://issues.apache.org/jira/browse/HDDS-907
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Elek, Marton
>Priority: Major
>  Labels: newbie
>
> Until HDDS-839 we had a hard-coded 15-second sleep before we started the 
> OzoneManager with the docker-compose files 
> (hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose).
> For initialization of the OzoneManager we need the SCM. The OM will retry 
> the connection if the SCM is not available, but the DNS resolution is 
> cached: if the DNS of the SCM is not available at OM startup, the OM can't 
> be initialized.
> Before HDDS-839 we handled this dependency with the 15-second sleep, which 
> was usually slower than what we need.
> Now we can use the WAITFOR environment variables from HDDS-839 (like 
> WAITFOR=scm:9876) to handle this dependency; they can be added to all the 
> docker-compose files.






[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750163#comment-16750163
 ] 

Hadoop QA commented on HDDS-989:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
2s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 31s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 10s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestSecureOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955988/HDDS-989.03.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux f020da693511 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 221e308 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2090/artifact/out/patch-mvninstall-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2090/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2090/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2090/testReport/ |
| Max. process+thread count | 1108 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2090/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>
> HDDS volumes should be checked for errors periodically.

[jira] [Commented] (HDFS-14209) RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable

2019-01-23 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750162#comment-16750162
 ] 

Shubham Dewan commented on HDFS-14209:
--

Thanks [~linyiqun] for the review and commit, and thanks [~elgoiri] and 
[~ayushtkn] for the reviews.

> RBF: setQuota() through router is working for only the mount Points under the 
> Source column in MountTable
> -
>
> Key: HDFS-14209
> URL: https://issues.apache.org/jira/browse/HDFS-14209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14209-HDFS-13891.002.patch, 
> HDFS-14209-HDFS-13891.003.patch, HDFS-14209.001.patch
>
>
> Through the router we are only able to setQuota for the directories under 
> the Source column of the mount table.
>  For any other directory that is not a mount table entry, a "No remote 
> locations available" IOException is thrown.
>  We should be able to setQuota for all such directories if they are present.
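
For reference, a minimal sketch of the client call that hits this path,
assuming fs.defaultFS points at a router (the path and quota values are
placeholders):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SetQuotaViaRouter {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS points at the router endpoint.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    // Works for mount points; per this issue, a directory below a mount
    // entry fails with "No remote locations available".
    dfs.setQuota(new Path("/mnt/data/subdir"),
        100_000L, HdfsConstants.QUOTA_DONT_SET);
  }
}
{code}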






[jira] [Assigned] (HDDS-993) Update hadoop version to 3.2.0

2019-01-23 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-993:
---

Assignee: Supratim Deka  (was: Bharat Viswanadham)

> Update hadoop version to 3.2.0
> --
>
> Key: HDDS-993
> URL: https://issues.apache.org/jira/browse/HDDS-993
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Supratim Deka
>Priority: Major
>
> This Jira is to update the Hadoop version to 3.2.0 and to clean up the 
> snapshot repository settings in the ozone module.






[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750120#comment-16750120
 ] 

Arpit Agarwal commented on HDDS-989:


The mvninstall Jenkins failure in the v02 patch looks related. It is likely 
because Jenkins did not pick up the updated Hadoop snapshot dependency.

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch
>
>
> HDDS volumes should be checked for errors periodically.






[jira] [Comment Edited] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750101#comment-16750101
 ] 

Arpit Agarwal edited comment on HDDS-989 at 1/23/19 3:15 PM:
-

Thank you for the review [~linyiqun]. v03 patch addresses your feedback.

bq. Can we rename numSyncChecks to numAllVolumesChecks
Fixed.

bq. why not be CountDownLatch(volumes.size())?
The latch is triggered when all volumes have completed checking, so 1 is 
correct here.

bq. Can we print an error log with the exception here?
Fixed.


was (Author: arpitagarwal):
Thank you for the review [~linyiqun].

bq. Can we rename numSyncChecks to numAllVolumesChecks
Fixed.

bq. why not be CountDownLatch(volumes.size())?
The latch is triggered when all volumes have completed checking, so 1 is 
correct here.

bq. Can we print an error log with the exception here?
Fixed.

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch
>
>
> HDDS volumes should be checked for errors periodically.






[jira] [Commented] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750101#comment-16750101
 ] 

Arpit Agarwal commented on HDDS-989:


Thank you for the review [~linyiqun].

bq. Can we rename numSyncChecks to numAllVolumesChecks
Fixed.

bq. why not be CountDownLatch(volumes.size())?
The latch is triggered when all volumes have completed checking, so 1 is 
correct here.

bq. Can we print an error log with the exception here?
Fixed.
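
To illustrate why a count of 1 works here, a simplified sketch of the pattern
(names and threading are illustrative, not the actual patch):

{code:java}
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

class AllVolumesCheck {
  // A single latch fires once, when the last volume check completes.
  void waitForAll(List<Runnable> volumeChecks) throws InterruptedException {
    CountDownLatch allDone = new CountDownLatch(1);
    AtomicLong remaining = new AtomicLong(volumeChecks.size());
    for (Runnable check : volumeChecks) {
      new Thread(() -> {
        check.run();
        // Only the final completing volume trips the latch, hence count 1.
        if (remaining.decrementAndGet() == 0) {
          allDone.countDown();
        }
      }).start();
    }
    allDone.await();
  }
}
{code}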

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch
>
>
> HDDS volumes should be checked for errors periodically.






[jira] [Updated] (HDDS-998) Remove XceiverClientSpi Interface in Ozone

2019-01-23 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-998:
-
Description: Currently, the Ratis client and the standalone client implement 
the XceiverClientSpi interface. Functionally, XceiverClientRatis supports and 
requires functionality to handle and update the committed log info of the 
Ratis servers, and the same needs to be propagated and handled in the Ozone 
client. For the standalone client, there is no notion of a Raft log index, 
and this functionality makes no sense. As the standalone client and the Ratis 
client have diverged functionally, it is logical not to keep a common 
interface for the two.  (was: Currently, the Ratis client and the standalone 
client implement the XceiverClientSpi interface. Functionally, 
XceiverClientRatis supports and requires functionality to handle and update 
the committed log info of the Ratis servers, and the same needs to be 
propagated and handled in the Ozone client. For the standalone client, there 
is no notion of a Raft log index, and this functionality makes no sense. As 
the standalone client and the Ratis client have diverged functionally, it 
makes no sense to keep a common interface for the two.)

> Remove XceiverClientSpi Interface in Ozone
> --
>
> Key: HDDS-998
> URL: https://issues.apache.org/jira/browse/HDDS-998
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
>
> Currently, the Ratis client and the standalone client implement the 
> XceiverClientSpi interface. Functionally, XceiverClientRatis supports and 
> requires functionality to handle and update the committed log info of the 
> Ratis servers, and the same needs to be propagated and handled in the Ozone 
> client. For the standalone client, there is no notion of a Raft log index, 
> and this functionality makes no sense. As the standalone client and the 
> Ratis client have diverged functionally, it is logical not to keep a common 
> interface for the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-989) Check Hdds Volumes for errors

2019-01-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-989:
---
Attachment: HDDS-989.03.patch

> Check Hdds Volumes for errors
> -
>
> Key: HDDS-989
> URL: https://issues.apache.org/jira/browse/HDDS-989
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Attachments: HDDS-989.01.patch, HDDS-989.02.patch, HDDS-989.03.patch
>
>
> HDDS volumes should be checked for errors periodically.






[jira] [Created] (HDDS-998) Remove XceiverClientSpi Interface in Ozone

2019-01-23 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-998:


 Summary: Remove XceiverClientSpi Interface in Ozone
 Key: HDDS-998
 URL: https://issues.apache.org/jira/browse/HDDS-998
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


Currently, the Ratis client and the standalone client implement the 
XceiverClientSpi interface. Functionally, XceiverClientRatis supports and 
requires functionality to handle and update the committed log info of the 
Ratis servers, and the same needs to be propagated and handled in the Ozone 
client. For the standalone client, there is no notion of a Raft log index, 
and this functionality makes no sense. As the standalone client and the Ratis 
client have diverged functionally, it makes no sense to keep a common 
interface for the two.
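
As a rough illustration of the divergence (the interface and method names
below are hypothetical stand-ins, not the actual XceiverClientSpi API):

{code:java}
// Hypothetical sketch: a shared interface forces the standalone client to
// stub out Ratis-only concepts such as the committed log index.
interface XceiverClientLike {
  void sendCommand(byte[] request);
  long getCommittedLogIndex(); // meaningful only for the Ratis client
}

class StandaloneClientSketch implements XceiverClientLike {
  @Override
  public void sendCommand(byte[] request) {
    // plain point-to-point call; no Raft log involved
  }

  @Override
  public long getCommittedLogIndex() {
    // There is no Raft log here; the method can only lie or throw.
    throw new UnsupportedOperationException("No Raft log in standalone mode");
  }
}
{code}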






[jira] [Updated] (HDFS-14209) RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable

2019-01-23 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-14209:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: setQuota() through router is working for only the mount Points under the 
> Source column in MountTable
> -
>
> Key: HDFS-14209
> URL: https://issues.apache.org/jira/browse/HDFS-14209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14209-HDFS-13891.002.patch, 
> HDFS-14209-HDFS-13891.003.patch, HDFS-14209.001.patch
>
>
> Through the router we are only able to setQuota for the directories under 
> the Source column of the mount table.
>  For any other directory that is not a mount table entry, a "No remote 
> locations available" IOException is thrown.
>  We should be able to setQuota for all such directories if they are present.






[jira] [Commented] (HDFS-14209) RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable

2019-01-23 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750085#comment-16750085
 ] 

Yiqun Lin commented on HDFS-14209:
--

Committed this to HDFS-13891.
Thanks [~shubham.dewan] for the contribution and thanks [~elgoiri], [~ayushtkn] 
for the reviews!

> RBF: setQuota() through router is working for only the mount Points under the 
> Source column in MountTable
> -
>
> Key: HDFS-14209
> URL: https://issues.apache.org/jira/browse/HDFS-14209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shubham Dewan
>Assignee: Shubham Dewan
>Priority: Major
> Attachments: HDFS-14209-HDFS-13891.002.patch, 
> HDFS-14209-HDFS-13891.003.patch, HDFS-14209.001.patch
>
>
> Through the router we are only able to setQuota for the directories under 
> the Source column of the mount table.
>  For any other directory that is not a mount table entry, a "No remote 
> locations available" IOException is thrown.
>  We should be able to setQuota for all such directories if they are present.





