[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-27 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631382#comment-16631382
 ] 

LiXin Ge commented on HDDS-401:
---

Thanks [~ajayydv] for reviewing and committing this.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage and space left.
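The dead-node bookkeeping described above can be sketched as follows. This is a minimal illustration, not the actual SCM code: the class and field names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch: aggregate cluster stats that drop a node's last
// reported figures when that node is declared dead.
class ClusterStats {
    long capacity, used, remaining;
    private final Map<String, long[]> lastReport = new HashMap<>();

    // Heartbeat report from a node: capacity, used and remaining bytes.
    void reportNode(String nodeId, long cap, long usedBytes, long rem) {
        long[] prev = lastReport.put(nodeId, new long[]{cap, usedBytes, rem});
        if (prev != null) {
            capacity -= prev[0]; used -= prev[1]; remaining -= prev[2];
        }
        capacity += cap; used += usedBytes; remaining += rem;
    }

    // Dead-node handler: remove the node's contribution from the totals.
    void onDeadNode(String nodeId) {
        long[] prev = lastReport.remove(nodeId);
        if (prev != null) {
            capacity -= prev[0]; used -= prev[1]; remaining -= prev[2];
        }
    }
}
```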



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631380#comment-16631380
 ] 

Hadoop QA commented on HDDS-548:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
55s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-hdds/common: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941572/HDDS-548-HDDS-4.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux caefbf456822 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / fe85d51 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1241/artifact/out/diff-checkstyle-hadoop-hdds_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1241/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1241/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Patch Available  (was: Open)

Patch 006 is rebased on HDDS-401.

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Attachment: HDDS-448.006.patch

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.






[jira] [Updated] (HDDS-448) Move NodeStat to NodeStatemanager from SCMNodeManager.

2018-09-27 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-448:
--
Status: Open  (was: Patch Available)

> Move NodeStat to NodeStatemanager from SCMNodeManager.
> --
>
> Key: HDDS-448
> URL: https://issues.apache.org/jira/browse/HDDS-448
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-448.000.patch, HDDS-448.001.patch, 
> HDDS-448.002.patch, HDDS-448.003.patch, HDDS-448.004.patch, 
> HDDS-448.005.patch, HDDS-448.006.patch
>
>
> This issue tries to make SCMNodeManager clear and clean, as the stat 
> information should be kept by NodeStateManager (NodeStateMap). It's also 
> described by [~nandakumar131] as a \{{TODO}}.






[jira] [Commented] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631374#comment-16631374
 ] 

Hadoop QA commented on HDFS-13945:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 57 unchanged - 0 fixed = 63 total (was 57) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941624/HDFS-13945-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2ec31c6c8f1e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5c8d907 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25157/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25157/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25157/testReport/ |
| Max. process+thread 

[jira] [Commented] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread chencan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631371#comment-16631371
 ] 

chencan commented on HDDS-289:
--

Thanks for your review, [~hanishakoneru]. I have made the changes and uploaded 
the v4 patch. The failing test is not related.

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch, 
> HDDS-289.003.patch, HDDS-289.004.patch
>
>
> Please see the example below, where the user issues a command to create a 
> bucket. /namit is the volume.
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning or error message instead of silently ignoring 
> everything after a '/'.
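A fix along the lines requested could validate the address eagerly. The sketch below is purely illustrative: the BucketAddress helper is invented for the example and is not Ozone's actual client code.

```java
// Hypothetical helper: parse "/volume/bucket" and fail loudly on any
// extra path components instead of silently dropping them.
class BucketAddress {
    final String volume, bucket;

    private BucketAddress(String v, String b) { volume = v; bucket = b; }

    static BucketAddress parse(String uri) {
        String trimmed = uri.startsWith("/") ? uri.substring(1) : uri;
        String[] parts = trimmed.split("/");
        if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
            throw new IllegalArgumentException(
                "Bucket address must be /volume/bucket, got: " + uri);
        }
        return new BucketAddress(parts[0], parts[1]);
    }
}
```

With this, `ozone oz -createBucket /namit/hjk/fgh` would be rejected up front rather than quietly creating `namit/hjk`.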






[jira] [Commented] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631356#comment-16631356
 ] 

Hadoop QA commented on HDDS-289:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-ozone: The patch generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 33s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941632/HDDS-289.004.patch |
| Optional Tests |  

[jira] [Commented] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631351#comment-16631351
 ] 

Hadoop QA commented on HDDS-548:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
36s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdds/common: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-548 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941572/HDDS-548-HDDS-4.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6f24e14fa649 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / fe85d51 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1239/artifact/out/diff-checkstyle-hadoop-hdds_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1239/testReport/ |
| Max. process+thread count | 304 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1239/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: venkata ram kumar ch

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: venkata ram kumar ch
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors with the command below.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785






[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: (was: venkata ram kumar ch)

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors with the command below.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785






[jira] [Commented] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-09-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631336#comment-16631336
 ] 

Anu Engineer commented on HDDS-560:
---

A quick question: why are we catching Exception in EndPointBase.java instead of 
IOException?
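For context, the difference the question points at can be illustrated like this. This is illustrative only, not the patch's code; the ThrowingOp interface is invented for the example.

```java
import java.io.IOException;

// Catching only IOException handles the anticipated failure mode while
// letting programming errors (e.g. NullPointerException) propagate;
// catching the broad Exception would swallow those bugs too.
class CatchScope {
    interface ThrowingOp {
        String call() throws IOException;
    }

    static String narrow(ThrowingOp op) throws IOException {
        try {
            return op.call();
        } catch (IOException e) {   // only the expected I/O failure
            return "io-error: " + e.getMessage();
        }
        // an unexpected RuntimeException here would propagate,
        // keeping the underlying bug visible
    }
}
```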

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> The exception class should have the following fields, with the structure 
> shown below:
>  
> {code:xml}
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
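A minimal sketch of such a generic exception class, rendering the AWS-compatible XML error body described above. The class and accessor names here are assumptions for illustration, not the attached patch.

```java
// Sketch: generic S3-style REST exception carrying the AWS error fields
// (Code, Message, Resource, RequestId) plus an HTTP status to return.
class OS3Exception extends Exception {
    private final String code, resource, requestId;
    private final int httpStatus;

    OS3Exception(String code, String message, String resource,
                 String requestId, int httpStatus) {
        super(message);
        this.code = code;
        this.resource = resource;
        this.requestId = requestId;
        this.httpStatus = httpStatus;
    }

    int getHttpStatus() { return httpStatus; }

    // Render the AWS-compatible XML error body.
    String toXml() {
        return "<Error>"
            + "<Code>" + code + "</Code>"
            + "<Message>" + getMessage() + "</Message>"
            + "<Resource>" + resource + "</Resource>"
            + "<RequestId>" + requestId + "</RequestId>"
            + "</Error>";
    }
}
```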






[jira] [Assigned] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13944:
---

Assignee: (was: venkata ram kumar ch)

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.






[jira] [Assigned] (HDFS-13939) [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13939:
---

Assignee: venkata ram kumar ch

> [JDK10] Javadoc build fails on JDK 10 in hadoop-hdfs-project
> 
>
> Key: HDFS-13939
> URL: https://issues.apache.org/jira/browse/HDFS-13939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: venkata ram kumar ch
>Priority: Major
>
> There are many javadoc errors on JDK 10 in hadoop-hdfs-project. Let's fix 
> them per project or module.
>  * hadoop-hdfs-project/hadoop-hdfs: 212 errors
>  * hadoop-hdfs-project/hadoop-hdfs-client: 85 errors
>  * hadoop-hdfs-project/hadoop-hdfs-rbf: 34 errors
> We can confirm the errors by below command.
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-hdfs-project/hadoop-hdfs-client
> {noformat}
> See also: HADOOP-15785






[jira] [Assigned] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread venkata ram kumar ch (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

venkata ram kumar ch reassigned HDFS-13944:
---

Assignee: venkata ram kumar ch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: venkata ram kumar ch
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.






[jira] [Commented] (HDDS-525) Support virtual-hosted style URLs

2018-09-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631335#comment-16631335
 ] 

Anu Engineer commented on HDDS-525:
---

[~bharatviswa] Thanks for the update. I have some minor comments on this patch.
# s3Config is a generic config; should we also add it to ozone-site.xml?
# Not part of this patch, but shouldn't we add S3GatewayConfig to our standard 
config tests? That is, if we add a key without the corresponding entry in 
ozone-site.xml, we should get a failure. What do you think?
# VirtualHostStyleFilter#getDomainName() --
Suppose I have two domains, "a.b.com" and "b.com". I am worried that it is 
possible for us to match "b.com" and return that as the domain even if the 
request domain was "a.b.com". I am wondering if we should match the domain with 
the longest match. In most cases this would not be a problem; it is just 
something that occurred to me while reading the code.
# VirtualHostStyleFilter#Line 84: Should we add a check that volume and bucket 
are not blank or null, something like isNotBlank from Strings?
# It might be a good idea to write some more tests, for example:
 ## Case where there is no key, something like the URI in create bucket.
 ## Case where there is a malformed URI, for example we have a host but no 
bucket and volume.
 ## Case where we have a host, but only a volume and no bucket.
 ## Case where we have a host HTTP header, but no domain attached to it.
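The longest-match concern above can be illustrated with a small sketch. This is a hypothetical helper, not the actual VirtualHostStyleFilter code: matching against the longest configured domain suffix avoids returning "b.com" for a request to "bucket.a.b.com":

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical illustration of longest-suffix domain matching; not the
// actual VirtualHostStyleFilter implementation.
public class DomainMatcher {

  /**
   * Returns the longest configured domain that is a suffix of the request
   * host, so "a.b.com" wins over "b.com" for host "bucket.a.b.com".
   */
  public static Optional<String> longestMatch(String host,
      List<String> configuredDomains) {
    return configuredDomains.stream()
        .filter(d -> host.equals(d) || host.endsWith("." + d))
        .max(Comparator.comparingInt(String::length));
  }

  public static void main(String[] args) {
    List<String> domains = List.of("a.b.com", "b.com");
    // Longest suffix wins, not whichever domain happens to be checked first.
    System.out.println(longestMatch("bucket.a.b.com", domains).get()); // a.b.com
    System.out.println(longestMatch("bucket.b.com", domains).get());   // b.com
  }
}
```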

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch
>
>
> AWS supports two kinds of patterns for the base URL of the S3 REST API: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the URL:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint URL is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd-party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878), but all the other postfixes after the port are 
> removed.
> This can be solved by using a virtual-hosted style URL which also includes 
> the volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing URL by adding the missing volume/bucket 
> part. 
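The hostname parsing the description proposes can be sketched roughly as below. This is only an illustration under assumed names; the domain suffix "s3g.internal" and the helper are hypothetical, not the attached patch:

```java
// Hypothetical sketch of turning a virtual-hosted style Host header into a
// path-style /volume/bucket prefix; not the actual patch code.
public class HostStyleParser {

  /** For host "bucket.volume.s3g.internal" returns "/volume/bucket". */
  public static String toPathStyle(String host, String domain) {
    if (!host.endsWith("." + domain)) {
      throw new IllegalArgumentException("Host does not match domain: " + host);
    }
    // Strip ".<domain>" and split the remaining "bucket.volume" prefix.
    String prefix = host.substring(0, host.length() - domain.length() - 1);
    String[] parts = prefix.split("\\.");
    if (parts.length != 2) {
      throw new IllegalArgumentException("Expected bucket.volume prefix: " + prefix);
    }
    return "/" + parts[1] + "/" + parts[0];  // volume first, then bucket
  }

  public static void main(String[] args) {
    System.out.println(toPathStyle("mybucket.myvolume.s3g.internal", "s3g.internal"));
    // -> /myvolume/mybucket
  }
}
```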






[jira] [Commented] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631323#comment-16631323
 ] 

Hadoop QA commented on HDFS-13938:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
2s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13938 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941626/HDFS-13938.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 18f743affcc5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5c8d907 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25158/testReport/ |
| Max. process+thread count | 644 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25158/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: 

[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631315#comment-16631315
 ] 

Ajay Kumar commented on HDFS-13941:
---

https://builds.apache.org/job/PreCommit-HDFS-Build/25160/

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch
>
>
>  The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
> audience, we should add an overloaded function to allow Hadoop 2 clients to 
> work with it.
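The backward-compatible overload the description asks for follows a standard pattern: keep the old signature and delegate to the new one. The sketch below uses deliberately simplified hypothetical signatures; the real checkAccess takes token, block, and access-mode arguments:

```java
// Hypothetical, heavily simplified sketch of keeping a legacy signature as
// an overload that delegates to the new one; not the real checkAccess API.
public class AccessChecker {

  /** New method: the storage id participates in the check. */
  public static boolean checkAccess(String blockPoolId, String storageId) {
    // A null storageId means "not provided", i.e. a legacy (Hadoop 2) caller.
    return blockPoolId != null && !blockPoolId.isEmpty();
  }

  /** Legacy overload kept for Hadoop 2 clients: no storageId argument. */
  public static boolean checkAccess(String blockPoolId) {
    return checkAccess(blockPoolId, null);
  }

  public static void main(String[] args) {
    System.out.println(checkAccess("BP-1508644862"));          // legacy path
    System.out.println(checkAccess("BP-1508644862", "DS-1"));  // new path
  }
}
```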






[jira] [Updated] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-289:
-
Status: Patch Available  (was: Open)

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch, 
> HDDS-289.003.patch, HDDS-289.004.patch
>
>
> Please see the example below. Here the user issues a command to create a 
> bucket, where /namit is the volume.
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning or error message instead of silently ignoring 
> everything after the '/'.
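The kind of check the report asks for can be sketched as follows. This is a hypothetical helper, not the code in the attached patches: reject (or at least warn about) any argument with more path segments than /volume/bucket:

```java
// Hypothetical validation for "/volume/bucket" style arguments; not the
// code in the attached HDDS-289 patches.
public class BucketPathCheck {

  /** Returns {volume, bucket}; throws if the argument has extra segments. */
  public static String[] parseVolumeBucket(String arg) {
    String trimmed = arg.startsWith("/") ? arg.substring(1) : arg;
    String[] parts = trimmed.split("/");
    if (parts.length != 2) {
      throw new IllegalArgumentException(
          "Expected /volume/bucket but got: " + arg);
    }
    return parts;
  }

  public static void main(String[] args) {
    String[] ok = parseVolumeBucket("/namit/hjk");
    System.out.println(ok[0] + " " + ok[1]); // namit hjk
    try {
      parseVolumeBucket("/namit/hjk/fgh");   // extra segment is rejected
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```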






[jira] [Updated] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-289:
-
Status: Open  (was: Patch Available)

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch, 
> HDDS-289.003.patch, HDDS-289.004.patch
>
>
> Please see the example below. Here the user issues a command to create a 
> bucket, where /namit is the volume.
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning or error message instead of silently ignoring 
> everything after the '/'.






[jira] [Updated] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-289:
-
Attachment: HDDS-289.004.patch

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch, 
> HDDS-289.003.patch, HDDS-289.004.patch
>
>
> Please see the example below. Here the user issues a command to create a 
> bucket, where /namit is the volume.
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above, it just ignored '/fgh'.
> There should be a warning or error message instead of silently ignoring 
> everything after the '/'.






[jira] [Updated] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-09-27 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13768:
-
Attachment: HDFS-13768.07.patch

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DN starting very slowly when rolling-upgrading our cluster. When 
> we restart DNs, they start slowly and do not register to the NN immediately, 
> and this causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it does the initial block pool operation 
> before registration. During block pool initialization, we found that adding 
> replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still need to wait for the slowest thread to finish its work. 
> So the main opportunity here is to make these threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One possible improvement is to use a ForkJoinPool for this recursive task 
> rather than the current synchronous approach. This would be a great 
> improvement because it can greatly speed up the recovery process.
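The ForkJoinPool idea can be sketched over an in-memory directory tree as follows. This is only a minimal illustration of the work-splitting pattern; the real BlockPoolSlice code walks File.listFiles() on disk, and all names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal illustration of replacing a synchronous recursive scan with
// ForkJoinPool work-stealing; the real code would walk the filesystem.
public class ReplicaScan {

  /** Toy stand-in for a directory entry: leaf nodes are block files. */
  static class Node {
    final String name;
    final List<Node> children;
    Node(String name, List<Node> children) {
      this.name = name;
      this.children = children;
    }
    boolean isFile() { return children.isEmpty(); }
  }

  /** Counts leaf files under a node, forking a subtask per subdirectory. */
  static class CountTask extends RecursiveTask<Long> {
    private final Node dir;
    CountTask(Node dir) { this.dir = dir; }

    @Override
    protected Long compute() {
      long files = 0;
      List<CountTask> subTasks = new ArrayList<>();
      for (Node child : dir.children) {
        if (child.isFile()) {
          files++;
        } else {
          CountTask t = new CountTask(child);
          t.fork();               // scan subdirectories in parallel
          subTasks.add(t);
        }
      }
      for (CountTask t : subTasks) {
        files += t.join();        // gather results from forked subtasks
      }
      return files;
    }
  }

  static long countFiles(Node root) {
    return new ForkJoinPool().invoke(new CountTask(root));
  }

  public static void main(String[] args) {
    Node tree = new Node("root", List.of(
        new Node("subdir0", List.of(new Node("blk_1", List.of()),
                                    new Node("blk_2", List.of()))),
        new Node("blk_3", List.of())));
    System.out.println(countFiles(tree)); // 3
  }
}
```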




[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-09-27 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631305#comment-16631305
 ] 

Yiqun Lin commented on HDFS-13768:
--

Thanks [~surendrasingh] for sharing the data. The improvement looks great!
Jenkins is fine now, so I'd like to attach the same patch as v06 to 
re-trigger it.
+1 from me. I will hold off the commit for a couple of days in case 
[~arpitagarwal] or other folks have comments on this.


>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.07.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found the DN starting very slowly when rolling-upgrading our cluster. When 
> we restart DNs, they start slowly and do not register to the NN immediately, 
> and this causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the DN startup logic, it does the initial block pool operation 
> before registration. During block pool initialization, we found that adding 
> replicas to the volume map is the most expensive operation. Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still need to wait for the slowest thread to finish its work. 
> So the main opportunity here is to make these threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement maybe we can use 

[jira] [Commented] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631292#comment-16631292
 ] 

Yiqun Lin commented on HDFS-13938:
--

Attaching the same patch again; I think Jenkins is fine now. :)

> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: https://issues.apache.org/jira/browse/HDFS-13938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Attachments: HDFS-13938.001.patch, HDFS-13938.001.patch, 
> HDFS-13938.002.patch
>
>
> In BaseTestHttpFSWith:
> {code:java}
> case DISALLOW_SNAPSHOT:
>   testDisallowSnapshot();
>   break;
> case DISALLOW_SNAPSHOT_EXCEPTION:
>   testDisallowSnapshotException();
>   // Missed a "break" here.
> case FILE_STATUS_ATTR:
>   testFileStatusAttr();
>   break;
> {code}
> The missing "break" won't cause any bugs though. Just the fact that 
> testFileStatusAttr() will be run a second time after 
> testDisallowSnapshotException() finishes. :P
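The effect of the missing break can be reproduced with a minimal stand-alone sketch. The enum and method names below are hypothetical, not the actual BaseTestHttpFSWith code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal reproduction of the fall-through bug: without a break,
// execution continues into the next case label.
public class FallThroughDemo {
  enum Op { DISALLOW_SNAPSHOT_EXCEPTION, FILE_STATUS_ATTR }

  static List<String> run(Op op, boolean withBreak) {
    List<String> calls = new ArrayList<>();
    switch (op) {
      case DISALLOW_SNAPSHOT_EXCEPTION:
        calls.add("testDisallowSnapshotException");
        if (withBreak) {
          break;               // the fix: stop here
        }
        // fall through (the bug): next case runs too
      case FILE_STATUS_ATTR:
        calls.add("testFileStatusAttr");
        break;
    }
    return calls;
  }

  public static void main(String[] args) {
    System.out.println(run(Op.DISALLOW_SNAPSHOT_EXCEPTION, false));
    // without break, testFileStatusAttr runs a second time
    System.out.println(run(Op.DISALLOW_SNAPSHOT_EXCEPTION, true));
    // with break, only testDisallowSnapshotException runs
  }
}
```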






[jira] [Updated] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13938:
-
Attachment: HDFS-13938.002.patch

> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: https://issues.apache.org/jira/browse/HDFS-13938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Attachments: HDFS-13938.001.patch, HDFS-13938.001.patch, 
> HDFS-13938.002.patch
>
>
> In BaseTestHttpFSWith:
> {code:java}
> case DISALLOW_SNAPSHOT:
>   testDisallowSnapshot();
>   break;
> case DISALLOW_SNAPSHOT_EXCEPTION:
>   testDisallowSnapshotException();
>   // Missed a "break" here.
> case FILE_STATUS_ATTR:
>   testFileStatusAttr();
>   break;
> {code}
> The missing "break" won't cause any bugs though. Just the fact that 
> testFileStatusAttr() will be run a second time after 
> testDisallowSnapshotException() finishes. :P






[jira] [Updated] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13945:

Attachment: HDFS-13945-02.patch

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch, HDFS-13945-02.patch
>
>
> The test is failing in trunk since long.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 

[jira] [Updated] (HDDS-547) Fix secure docker and configs

2018-09-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-547:

Attachment: HDDS-547-HDDS-4.004.patch

> Fix secure docker and configs
> -
>
> Key: HDDS-547
> URL: https://issues.apache.org/jira/browse/HDDS-547
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-547-HDDS-4.001.patch, HDDS-547-HDDS-4.002.patch, 
> HDDS-547-HDDS-4.003.patch, HDDS-547-HDDS-4.004.patch
>
>
> This is to provide a workable secure docker after recent trunk rebase for 
> dev/test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-547) Fix secure docker and configs

2018-09-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631175#comment-16631175
 ] 

Xiaoyu Yao edited comment on HDDS-547 at 9/28/18 1:27 AM:
--

Patch v4 fixed the Jenkins shellcheck and pylint check issues.


was (Author: xyao):
patch v3 fixed Jenkins issue on shellcheck and pylint checks.

> Fix secure docker and configs
> -
>
> Key: HDDS-547
> URL: https://issues.apache.org/jira/browse/HDDS-547
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-547-HDDS-4.001.patch, HDDS-547-HDDS-4.002.patch, 
> HDDS-547-HDDS-4.003.patch, HDDS-547-HDDS-4.004.patch
>
>
> This is to provide a workable secure docker after recent trunk rebase for 
> dev/test.






[jira] [Comment Edited] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630831#comment-16630831
 ] 

Anu Engineer edited comment on HDDS-434 at 9/28/18 12:51 AM:
-

Posted a design doc for community feedback and to share the development plan 
with the community.


was (Author: anu):
Post a design doc for community feedback and to share the development plan with 
community.

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: S3Gateway.pdf
>
>
> S3 REST api is the de facto standard for object stores. Many external tools 
> already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the s3 API using the internal RPC calls.
> Some parts of the implementation are very straightforward: we need a new 
> service with the usual REST stack, and we need to implement the most common 
> GET/POST/PUT calls. Some others (authorization, multi-part upload) are more 
> tricky.
> Here I suggest an incremental approach: first we can implement a skeleton 
> service which supports read-only requests without authorization, and we 
> can define a proper specification for the upload part / authorization during 
> the work.
> As of now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin similar to the 
> existing object store plugin. 






[jira] [Commented] (HDDS-551) Fix the close container status check in CloseContainerCommandHandler

2018-09-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631192#comment-16631192
 ] 

Hudson commented on HDDS-551:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15070 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15070/])
HDDS-551. Fix the close container status check in (hanishakoneru: rev 
2a5d4315bfe13c12cacc7718537077bf9abb22e2)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java


> Fix the close container status check in CloseContainerCommandHandler
> 
>
> Key: HDDS-551
> URL: https://issues.apache.org/jira/browse/HDDS-551
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-551.000.patch
>
>
> If the container is already closed while retrying to close the container in a 
> Datanode which is not a leader, we just log the info and still submit the 
> close request to Ratis. Ideally, this check should be moved to 
> CloseContainerCommandHandler and we should just return without submitting any 
> request.
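The proposed guard can be sketched as follows. This is a minimal, self-contained illustration — `State` and `shouldSubmitClose` are hypothetical stand-ins, not the actual Ozone container classes:

```java
// Sketch of the guard proposed for CloseContainerCommandHandler:
// skip the Ratis submission when the container is already closed.
public class CloseGuardSketch {
  enum State { OPEN, CLOSING, CLOSED }

  // Return true only when a close request should still be submitted to Ratis.
  static boolean shouldSubmitClose(State containerState) {
    if (containerState == State.CLOSED) {
      // Already closed: log the info and return instead of submitting.
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(shouldSubmitClose(State.OPEN));   // true
    System.out.println(shouldSubmitClose(State.CLOSED)); // false
  }
}
```

The point is only where the check lives: the handler decides before issuing the request, rather than Ratis rejecting it afterwards.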






[jira] [Commented] (HDDS-520) Implement HeadBucket REST endpoint

2018-09-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631180#comment-16631180
 ] 

Bharat Viswanadham commented on HDDS-520:
-

This patch is dependent on HDDS-560.

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html
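The status mapping above can be modeled as a toy function; in the real endpoint the two booleans would come from OzoneManager lookups, so this is only an illustration of the S3 semantics, not the actual implementation:

```java
// Toy model of the HEAD-bucket response codes described above.
public class HeadBucketSketch {
  static int headBucket(boolean bucketExists, boolean callerPermitted) {
    if (!bucketExists) {
      return 404; // Not Found: no such bucket
    }
    if (!callerPermitted) {
      return 403; // Forbidden: bucket exists but access is denied
    }
    return 200;   // OK: bucket exists and the caller may access it
  }

  public static void main(String[] args) {
    System.out.println(headBucket(true, true));   // 200
    System.out.println(headBucket(false, true));  // 404
    System.out.println(headBucket(true, false));  // 403
  }
}
```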






[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Status: Patch Available  (was: In Progress)

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Updated] (HDDS-520) Implement HeadBucket REST endpoint

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-520:

Attachment: HDDS-520.00.patch

> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-520.00.patch
>
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Commented] (HDDS-547) Fix secure docker and configs

2018-09-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631175#comment-16631175
 ] 

Xiaoyu Yao commented on HDDS-547:
-

Patch v3 fixed the Jenkins shellcheck and pylint check issues.

> Fix secure docker and configs
> -
>
> Key: HDDS-547
> URL: https://issues.apache.org/jira/browse/HDDS-547
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-547-HDDS-4.001.patch, HDDS-547-HDDS-4.002.patch, 
> HDDS-547-HDDS-4.003.patch
>
>
> This is to provide a workable secure docker after recent trunk rebase for 
> dev/test.






[jira] [Updated] (HDDS-547) Fix secure docker and configs

2018-09-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-547:

Attachment: HDDS-547-HDDS-4.003.patch

> Fix secure docker and configs
> -
>
> Key: HDDS-547
> URL: https://issues.apache.org/jira/browse/HDDS-547
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-547-HDDS-4.001.patch, HDDS-547-HDDS-4.002.patch, 
> HDDS-547-HDDS-4.003.patch
>
>
> This is to provide a workable secure docker after recent trunk rebase for 
> dev/test.






[jira] [Updated] (HDDS-557) DeadNodeHandler should handle exception from removeContainerHandler api

2018-09-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-557:

Attachment: HDDS-557.00.patch

> DeadNodeHandler should handle exception from removeContainerHandler api
> ---
>
> Key: HDDS-557
> URL: https://issues.apache.org/jira/browse/HDDS-557
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
> Attachments: HDDS-557.00.patch
>
>
> DeadNodeHandler should handle exception from removeContainerHandler api
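The intent — one failing container must not abort cleanup of the rest — can be sketched like this. Names (`removeContainerReplica`, `cleanup`) are hypothetical, not the DeadNodeHandler API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: handle a per-container exception so one bad container does not
// stop processing of the remaining containers on a dead node.
public class DeadNodeCleanupSketch {
  static void removeContainerReplica(long containerId) {
    if (containerId == 2L) {
      // Simulated failure for one container.
      throw new IllegalStateException("replica state missing");
    }
  }

  static List<Long> cleanup(List<Long> containerIds) {
    List<Long> processed = new ArrayList<>();
    for (long id : containerIds) {
      try {
        removeContainerReplica(id);
        processed.add(id);
      } catch (RuntimeException e) {
        // Log and continue instead of propagating.
        System.err.println("Skipping container " + id + ": " + e.getMessage());
      }
    }
    return processed;
  }

  public static void main(String[] args) {
    System.out.println(cleanup(Arrays.asList(1L, 2L, 3L))); // [1, 3]
  }
}
```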






[jira] [Updated] (HDDS-551) Fix the close container status check in CloseContainerCommandHandler

2018-09-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-551:

   Resolution: Fixed
Fix Version/s: 0.2.2
   Status: Resolved  (was: Patch Available)

> Fix the close container status check in CloseContainerCommandHandler
> 
>
> Key: HDDS-551
> URL: https://issues.apache.org/jira/browse/HDDS-551
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.2
>
> Attachments: HDDS-551.000.patch
>
>
> If the container is already closed while retrying to close the container in a 
> Datanode which is not a leader, we just log the info and still submit the 
> close request to Ratis. Ideally, this check should be moved to 
> CloseContainerCommandHandler and we should just return without submitting any 
> request.






[jira] [Commented] (HDDS-551) Fix the close container status check in CloseContainerCommandHandler

2018-09-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631158#comment-16631158
 ] 

Hanisha Koneru commented on HDDS-551:
-

Committed to trunk. Thanks for the contribution [~shashikant].

> Fix the close container status check in CloseContainerCommandHandler
> 
>
> Key: HDDS-551
> URL: https://issues.apache.org/jira/browse/HDDS-551
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-551.000.patch
>
>
> If the container is already closed while retrying to close the container in a 
> Datanode which is not a leader, we just log the info and still submit the 
> close request to Ratis. Ideally, this check should be moved to 
> CloseContainerCommandHandler and we should just return without submitting any 
> request.






[jira] [Updated] (HDDS-557) DeadNodeHandler should handle exception from removeContainerHandler api

2018-09-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-557:

Assignee: Ajay Kumar
  Status: Patch Available  (was: Open)

> DeadNodeHandler should handle exception from removeContainerHandler api
> ---
>
> Key: HDDS-557
> URL: https://issues.apache.org/jira/browse/HDDS-557
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-557.00.patch
>
>
> DeadNodeHandler should handle exception from removeContainerHandler api






[jira] [Commented] (HDDS-551) Fix the close container status check in CloseContainerCommandHandler

2018-09-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631154#comment-16631154
 ] 

Hanisha Koneru commented on HDDS-551:
-

Thanks [~shashikant] for working on this. 

LGTM. +1

(There is one checkstyle issue - line longer than 80, in 
CloseContainerCommandHandler#83. I will fix it while committing).

> Fix the close container status check in CloseContainerCommandHandler
> 
>
> Key: HDDS-551
> URL: https://issues.apache.org/jira/browse/HDDS-551
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-551.000.patch
>
>
> If the container is already closed while retrying to close the container in a 
> Datanode which is not a leader, we just log the info and still submit the 
> close request to Ratis. Ideally, this check should be moved to 
> CloseContainerCommandHandler and we should just return without submitting any 
> request.






[jira] [Commented] (HDFS-8196) Erasure Coding related information on NameNode UI

2018-09-27 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631139#comment-16631139
 ] 

Kitti Nanasi commented on HDFS-8196:


Thanks for the comments [~tasanuma0829], I agree with them and fixed them in 
patch v007.

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, HDFS-8196.02.patch, 
> HDFS-8196.03.patch, HDFS-8196.04.patch, HDFS-8196.05.patch, 
> HDFS-8196.06.patch, HDFS-8196.07.patch, Screen Shot 2017-02-06 at 
> 22.30.40.png, Screen Shot 2017-02-12 at 20.21.42.png, Screen Shot 2017-02-14 
> at 22.43.57.png
>
>
> NameNode WebUI shows EC-related information and metrics. 
> This depends on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].






[jira] [Updated] (HDFS-8196) Erasure Coding related information on NameNode UI

2018-09-27 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-8196:
---
Attachment: HDFS-8196.07.patch

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, HDFS-8196.02.patch, 
> HDFS-8196.03.patch, HDFS-8196.04.patch, HDFS-8196.05.patch, 
> HDFS-8196.06.patch, HDFS-8196.07.patch, Screen Shot 2017-02-06 at 
> 22.30.40.png, Screen Shot 2017-02-12 at 20.21.42.png, Screen Shot 2017-02-14 
> at 22.43.57.png
>
>
> NameNode WebUI shows EC-related information and metrics. 
> This depends on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].






[jira] [Commented] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631095#comment-16631095
 ] 

Ajay Kumar commented on HDDS-548:
-

https://builds.apache.org/job/PreCommit-HDDS-Build/1233/

> Create a Self-Signed Certificate
> 
>
> Key: HDDS-548
> URL: https://issues.apache.org/jira/browse/HDDS-548
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-548-HDDS-4.001.patch, HDDS-548-HDDS-4.002.patch, 
> HDDS-548-HDDS-4.003.patch, HDDS-548-HDDS-4.004.patch, 
> HDDS-548-HDDS-4.005.patch, HDDS-548-HDDS-4.006.patch, HDDS-548.001.patch
>
>
> This Jira proposes to create a class that can create a self-signed 
> certificate that can help with testing and can also act as a CA. This is 
> needed to bootstrap SCM in the absence of a user-provided CA certificate and 
> is also needed for testing.
> cc: [~ajayydv], [~xyao]






[jira] [Work started] (HDDS-520) Implement HeadBucket REST endpoint

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-520 started by Bharat Viswanadham.
---
> Implement HeadBucket REST endpoint
> --
>
> Key: HDDS-520
> URL: https://issues.apache.org/jira/browse/HDDS-520
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This operation is useful to determine if a bucket exists and you have 
> permission to access it. The operation returns a 200 OK if the bucket exists 
> and you have permission to access it. Otherwise, the operation might return 
> responses such as 404 Not Found and 403 Forbidden.  
> See the reference here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html






[jira] [Updated] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-09-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13877:
--
Attachment: HDFS-13877.001.patch
Status: Patch Available  (was: In Progress)

> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13877.001.patch
>
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.
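For orientation, HttpFS serves the same WebHDFS-style request shape as HDFS-13052; the snippet below only builds the request URL (host, port, path, and snapshot names are placeholder assumptions, and the parameter names follow the WebHDFS `GETSNAPSHOTDIFF` convention):

```java
// Sketch of the WebHDFS-style GETSNAPSHOTDIFF request URL that an HttpFS
// client would issue. All concrete values here are illustrative.
public class SnapshotDiffUrlSketch {
  static String snapshotDiffUrl(String base, String path,
                                String oldSnapshot, String newSnapshot) {
    return base + "/webhdfs/v1" + path
        + "?op=GETSNAPSHOTDIFF"
        + "&oldsnapshotname=" + oldSnapshot
        + "&snapshotname=" + newSnapshot;
  }

  public static void main(String[] args) {
    // e.g. http://httpfs-host:14000/webhdfs/v1/user/alice/dir?op=GETSNAPSHOTDIFF&oldsnapshotname=s1&snapshotname=s2
    System.out.println(snapshotDiffUrl(
        "http://httpfs-host:14000", "/user/alice/dir", "s1", "s2"));
  }
}
```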






[jira] [Commented] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631047#comment-16631047
 ] 

Íñigo Goiri commented on HDFS-13945:


Thanks for the explanation.
Did this ever pass then? Can you link the JIRAs that introduced the issue?

For the new InstanceStorageDir approach, we should make it into a function, 
potentially in one of the utils classes.

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch
>
>
> The test has been failing in trunk for a long time.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 03:32:07,162
>
> "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 timed_waiting
> java.lang.Thread.State: TIMED_WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)
>
> "IPC Server handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting
> java.lang.Thread.State: TIMED_WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)
>
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 tid=2633 timed_waiting
> java.lang.Thread.State: TIMED_WAITING
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
> at java.lang.Thread.run(Thread.java:748)
>
> "IPC Server Responder" daemon prio=5 tid=2766 runnable
> java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334)
> at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317)
>
> "org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon prio=5 tid=2492 timed_waiting
> java.lang.Thread.State: TIMED_WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
> at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
> at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> "qtp548667392-2533" daemon prio=5 tid=2533 timed_waiting
> java.lang.Thread.State: TIMED_WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at
> 

[jira] [Commented] (HDDS-289) While creating bucket everything after '/' is ignored without any warning

2018-09-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631044#comment-16631044
 ] 

Hanisha Koneru commented on HDDS-289:
-

Thanks for working on this [~candychencan].
 Patch LGTM overall. A few comments:
 # PutKey should allow "/" in the key name. We can create keys using {{ozone 
fs}} (and they can have "/" in the keyname). So {{ozone sh key}} should also 
allow keys with a "/" in them.
 # The error message "Path ... too long in ..." is ambiguous. Can we expand it 
to say something like "Invalid bucket name. Delimiters ("/") not allowed in 
bucket name"?
 # A minor NIT: Most of the handlers already have a check with respect to 
path.getNameCount(). We could probably optimize by combining them. Something 
like below:
{code:java}
int pathNameCount = path.getNameCount();
if (pathNameCount != 2) {
  // Expect exactly two name elements: <volume>/<bucket>.
  String errorMessage;
  if (pathNameCount < 2) {
    errorMessage = "Volume and bucket name required in createBucket";
  } else {
    errorMessage = "Invalid bucket name. Delimiters (/) not allowed in " +
        "bucket name";
  }
  throw new OzoneClientException(errorMessage);
}
{code}

> While creating bucket everything after '/' is ignored without any warning
> -
>
> Key: HDDS-289
> URL: https://issues.apache.org/jira/browse/HDDS-289
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Namit Maheshwari
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-289.001.patch, HDDS-289.002.patch, 
> HDDS-289.003.patch
>
>
> Please see below example. Here the user issues command to create bucket like 
> below. Where /namit is the volume. 
> {code}
> hadoop@288c0999be17:~$ ozone oz -createBucket /namit/hjk/fgh
> 2018-07-24 00:30:52 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-07-24 00:30:52 INFO  RpcClient:337 - Creating Bucket: namit/hjk, with 
> Versioning false and Storage Type set to DISK
> {code}
> As seen above it just ignored '/fgh'
> There should be a Warning / Error message instead of just ignoring everything 
> after a '/' 






[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630948#comment-16630948
 ] 

Ajay Kumar commented on HDFS-13941:
---

Triggered the pre-commit build manually. 
[https://builds.apache.org/job/PreCommit-HDFS-Build/25154/] 

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
> audience, we should add an overloaded function to allow Hadoop 2 clients to 
> work with it.






[jira] [Commented] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630941#comment-16630941
 ] 

Ajay Kumar commented on HDDS-548:
-

+1 pending jenkins.

> Create a Self-Signed Certificate
> 
>
> Key: HDDS-548
> URL: https://issues.apache.org/jira/browse/HDDS-548
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-548-HDDS-4.001.patch, HDDS-548-HDDS-4.002.patch, 
> HDDS-548-HDDS-4.003.patch, HDDS-548-HDDS-4.004.patch, 
> HDDS-548-HDDS-4.005.patch, HDDS-548-HDDS-4.006.patch, HDDS-548.001.patch
>
>
> This Jira proposes to create a class that can create a self-signed 
> certificate that can help with testing and can also act as a CA. This is 
> needed to bootstrap SCM in the absence of a user-provided CA certificate and 
> is also needed for testing.
> cc: [~ajayydv], [~xyao]






[jira] [Commented] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630935#comment-16630935
 ] 

Ayush Saxena commented on HDFS-13945:
-

Thanks [~elgoiri] for the comment.
 Regarding the overview: IIUC, the test aims to verify that if we fail a 
volume on a datanode holding a replica of a block, that block ends up 
under-replicated; the assertion checks exactly for this under-replicated 
state. Judging by the logs, the failure happened because the file creation was 
above the assertion check: between creating the file and waiting for it to get 
its 3 replicas, the under-replicated block got re-replicated. So by the time 
the assertion ran, it found no under-replicated blocks, since that block had 
already been replicated. To counter this, I moved the file creation below the 
check, eliminating the window in which the block could be re-replicated.
 * getInstanceStorageDir() works fine in most cases, but I hit a scenario 
locally (not here on Jenkins) where the block was not in the storage dir that 
we failed. To make sure the volume we fail is definitely the one holding the 
replica, I moved to the alternative used here.
 * The file creation was done earlier too. One purpose I could see for it was 
to make the disk report an error, trigger the block reports, and identify the 
block we expect to become under-replicated. It can certainly be removed; I did 
not remove it completely only because of a doubt that it might be checking 
something else, such as whether we can still add a file after a volume 
failure. That said, this seems unrelated to the scenario being tested, so if 
we agree it has no bearing on the test in its present form, we can remove it. 
:)
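The race described above is the kind these tests guard against with a poll-and-timeout helper (Hadoop's GenericTestUtils.waitFor follows this pattern). A minimal self-contained sketch of the pattern, with a stand-in condition:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Minimal sketch of the poll-with-timeout pattern used by race-prone HDFS
// tests (Hadoop's GenericTestUtils.waitFor is the real equivalent). The
// condition here is a hypothetical stand-in; in the test it would query the
// NameNode's under-replicated block count after the volume failure.
public class WaitFor {

  public static void waitFor(BooleanSupplier check, long intervalMs,
      long timeoutMs) throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Timed out waiting for condition");
      }
      Thread.sleep(intervalMs);   // poll instead of asserting a snapshot
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Stand-in condition: becomes true after ~200 ms, like a block report
    // arriving and marking a block under-replicated.
    waitFor(() -> System.currentTimeMillis() - start > 200, 50, 5000);
    System.out.println("condition met");
  }
}
```

Because the polled state can flip back (here, the block getting re-replicated), the order of file creation versus the waiting assertion matters, which is what the patch changes.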

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch
>
>
> The test has been failing in trunk for a long time.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) 
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 
> tid=2633 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
> at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 
> tid=2766 runnable java.lang.Thread.State: RUNNABLE at 
> sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
> 

[jira] [Work started] (HDFS-13877) HttpFS: Implement GETSNAPSHOTDIFF

2018-09-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13877 started by Siyao Meng.
-
> HttpFS: Implement GETSNAPSHOTDIFF
> -
>
> Key: HDFS-13877
> URL: https://issues.apache.org/jira/browse/HDFS-13877
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Implement GETSNAPSHOTDIFF (from HDFS-13052) in HttpFS.






[jira] [Commented] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-09-27 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630882#comment-16630882
 ] 

Bharat Viswanadham commented on HDDS-560:
-

Added generic exception classes and mapper classes.

Added only NO_SUCH_BUCKET to show the usage. The remaining exceptions will be 
handled in each REST request implementation.
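A generic exception class matching the error shape in the issue description reduces to a few fields plus XML rendering. A minimal self-contained sketch (class and field names here are illustrative, not those from the attached patch):

```java
// Illustrative sketch of a generic S3-style REST exception: an error code, an
// HTTP status, and the request context, rendered in the AWS error XML shape.
// Names are hypothetical, not the ones used in the actual HDDS-560 patch.
public class OS3ExceptionSketch extends Exception {
  private final String code;      // e.g. "NoSuchBucket"
  private final int httpStatus;   // e.g. 404
  private final String resource;
  private final String requestId;

  public OS3ExceptionSketch(String code, String message, int httpStatus,
      String resource, String requestId) {
    super(message);
    this.code = code;
    this.httpStatus = httpStatus;
    this.resource = resource;
    this.requestId = requestId;
  }

  public int getHttpStatus() { return httpStatus; }

  /** Render the AWS-style error body for the HTTP response. */
  public String toXml() {
    return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
        + "<Error>\n"
        + "  <Code>" + code + "</Code>\n"
        + "  <Message>" + getMessage() + "</Message>\n"
        + "  <Resource>" + resource + "</Resource>\n"
        + "  <RequestId>" + requestId + "</RequestId>\n"
        + "</Error>";
  }

  public static void main(String[] args) {
    OS3ExceptionSketch ex = new OS3ExceptionSketch("NoSuchBucket",
        "The specified bucket does not exist", 404, "/mybucket", "req-1");
    System.out.println(ex.toXml());
  }
}
```

A JAX-RS ExceptionMapper (as the comment mentions) would then turn this into a Response with `getHttpStatus()` and the XML body.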

 

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList






[jira] [Updated] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-560:

Attachment: HDDS-560.00.patch

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList






[jira] [Updated] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-560:

Status: Patch Available  (was: In Progress)

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList






[jira] [Updated] (HDDS-560) Create Generic exception class to be used by S3 rest services

2018-09-27 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-560:

Target Version/s: 0.2.2

> Create Generic exception class to be used by S3 rest services
> -
>
> Key: HDDS-560
> URL: https://issues.apache.org/jira/browse/HDDS-560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-560.00.patch
>
>
> Exception class should have the following fields, and structure should be as 
> below:
>  
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <Error>
>   <Code>NoSuchKey</Code>
>   <Message>The resource you requested does not exist</Message>
>   <Resource>/mybucket/myfoto.jpg</Resource>
>   <RequestId>4442587FB7D0A2F9</RequestId>
> </Error>
> {code}
>  
>  
> https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList






[jira] [Commented] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630875#comment-16630875
 ] 

Íñigo Goiri commented on HDFS-13945:


Thanks [~ayushtkn] for opening this.
Can you give an overview of what the fix is?
* Why is {{getInstanceStorageDir()}} not working, so that you have to use the 
full method?
* Why do you create the file later now?

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch
>
>
> The test has been failing in trunk for a long time.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) 
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 
> tid=2633 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
> at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 
> tid=2766 runnable java.lang.Thread.State: RUNNABLE at 
> sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
> org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334) at 
> org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317) 
> "org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon 
> prio=5 tid=2492 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) "qtp548667392-2533" daemon prio=5 
> tid=2533 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> 

[jira] [Commented] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630831#comment-16630831
 ] 

Anu Engineer commented on HDDS-434:
---

Posted a design doc for community feedback and to share the development plan 
with the community.

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: S3Gateway.pdf
>
>
> The S3 REST API is the de facto standard for object stores. Many external tools 
> already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the S3 API using the internal RPC calls.
> Part of the implementation is very straightforward: we need a new service with 
> the usual REST stack, and we need to implement the most common GET/POST/PUT 
> calls. Some other parts (authorization, multi-part upload) are more tricky.
> Here I suggest starting with an evaluation: first we can implement a skeleton 
> service which supports read-only requests without authorization, and we can 
> define a proper specification for the upload part / authorization during the 
> work.
> As of now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin similar to the 
> existing object store plugin.






[jira] [Updated] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-434:
--
Attachment: S3Gateway.pdf

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: S3Gateway.pdf
>
>
> The S3 REST API is the de facto standard for object stores. Many external tools 
> already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the S3 API using the internal RPC calls.
> Part of the implementation is very straightforward: we need a new service with 
> the usual REST stack, and we need to implement the most common GET/POST/PUT 
> calls. Some other parts (authorization, multi-part upload) are more tricky.
> Here I suggest starting with an evaluation: first we can implement a skeleton 
> service which supports read-only requests without authorization, and we can 
> define a proper specification for the upload part / authorization during the 
> work.
> As of now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to work as a DatanodePlugin similar to the 
> existing object store plugin.






[jira] [Comment Edited] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630794#comment-16630794
 ] 

Ajay Kumar edited comment on HDFS-13941 at 9/27/18 5:59 PM:


[https://builds.apache.org/job/PreCommit-HDFS-Build/25145/] 


was (Author: ajayydv):
https://builds.apache.org/job/PreCommit-HDDS-Build/1230/

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
> audience, we should add an overloaded function to allow Hadoop 2 clients to 
> work with it.






[jira] [Updated] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-548:
--
Attachment: HDDS-548-HDDS-4.006.patch

> Create a Self-Signed Certificate
> 
>
> Key: HDDS-548
> URL: https://issues.apache.org/jira/browse/HDDS-548
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-548-HDDS-4.001.patch, HDDS-548-HDDS-4.002.patch, 
> HDDS-548-HDDS-4.003.patch, HDDS-548-HDDS-4.004.patch, 
> HDDS-548-HDDS-4.005.patch, HDDS-548-HDDS-4.006.patch, HDDS-548.001.patch
>
>
> This Jira proposes to create a class that can create a self-signed 
> certificate that can help with testing and can also act as a CA. This is 
> needed to bootstrap SCM in the absence of a user-provided CA certificate and 
> is also needed for testing.
> cc: [~ajayydv], [~xyao]






[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-27 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630815#comment-16630815
 ] 

Wei-Chiu Chuang commented on HDFS-13916:


On top of Xiaoyu's comment: perhaps you can remove 
testSyncSnapshotDiffWithWebHdfs1 and testSyncSnapshotDiffWithWebHdfs2, and 
retain just testSyncSnapshotDiffWithWebHdfs3?

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: 
> https://issues.apache.org/jira/browse/HDFS-13052 to provide the possibility 
> to make DistCP of SnapshotDiff with WebHDFSFileSystem. However, in the patch, 
> there is no modification for the real java class which is used by launching 
> the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.






[jira] [Commented] (HDFS-13941) make storageId in BlockPoolTokenSecretManager.checkAccess optional

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630794#comment-16630794
 ] 

Ajay Kumar commented on HDFS-13941:
---

https://builds.apache.org/job/PreCommit-HDDS-Build/1230/

> make storageId in BlockPoolTokenSecretManager.checkAccess optional
> --
>
> Key: HDFS-13941
> URL: https://issues.apache.org/jira/browse/HDFS-13941
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13941.00.patch
>
>
> The change in {{BlockPoolTokenSecretManager.checkAccess}} by 
> [HDFS-9807|https://issues.apache.org/jira/browse/HDFS-9807] breaks backward 
> compatibility for Hadoop 2 clients. Since SecretManager is marked for public 
> audience, we should add an overloaded function to allow Hadoop 2 clients to 
> work with it.






[jira] [Commented] (HDDS-548) Create a Self-Signed Certificate

2018-09-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630812#comment-16630812
 ] 

Anu Engineer commented on HDDS-548:
---

[~ajayydv] Thanks for the review. Please see specific comments below.
{quote}Since this root certificate is private to this class don't we need 
another api to handle this?
{quote}
Well, the root certificate is used to kick-start the CA or a simple standalone 
system, like an SSL node. We will be creating a CA class that can take a CSR 
and sign certificates for the users. But that CA will need a self-signed or a 
Root CA certificate provided. This patch is the first of many that will create 
such a system, hence the private signatures.
{quote}CertficateException: Typo in classname and constructor.
{quote}
Good catch, fixed.
{quote}L69: Rename {{privateKeyName}} to {{privateKeyFileName}}?
{quote}
Done
{quote}L70: Rename {{publicKeyName}} to {{publicKeyFileName}}?
{quote}
Done
{quote}L128: rename api {{getPublicKeyName}} to {{getPublicKeyFileName}}?
{quote}
Done
{quote}L138: rename getPrivateKeyName to getPrivateKeyFileName?
{quote}
Done.

 

The patch v6 takes care of all these issues.

> Create a Self-Signed Certificate
> 
>
> Key: HDDS-548
> URL: https://issues.apache.org/jira/browse/HDDS-548
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-548-HDDS-4.001.patch, HDDS-548-HDDS-4.002.patch, 
> HDDS-548-HDDS-4.003.patch, HDDS-548-HDDS-4.004.patch, 
> HDDS-548-HDDS-4.005.patch, HDDS-548-HDDS-4.006.patch, HDDS-548.001.patch
>
>
> This Jira proposes to create a class that can create a self-signed 
> certificate that can help with testing and can also act as a CA. This is 
> needed to bootstrap SCM in the absence of a user-provided CA certificate and 
> is also needed for testing.
> cc: [~ajayydv], [~xyao]






[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630792#comment-16630792
 ] 

Xiaoyu Yao commented on HDFS-13916:
---

Thanks [~renxunsaky] for reporting and posting the patch. Patch v4 looks pretty 
good to me.

I just have a few minor comments:

DistCpSync.java

Line 204: we should also check in the !isRdiff() case, where the source file 
system might not be webhdfs or hdfs.

{code} 

else if (fs instanceof WebHdfsFileSystem)

{code}
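The check being discussed is an instanceof dispatch over the source filesystem type. A self-contained sketch of that dispatch, using stub classes as hypothetical stand-ins for Hadoop's FileSystem hierarchy:

```java
// Stand-in sketch of DistCpSync-style filesystem dispatch. FileSystem,
// DistributedFileSystem and WebHdfsFileSystem are hypothetical stubs for the
// Hadoop classes; the point is that the pre-sync check should accept either
// concrete type rather than only DFS.
public class PreSyncCheckSketch {
  static class FileSystem {}
  static class DistributedFileSystem extends FileSystem {}
  static class WebHdfsFileSystem extends FileSystem {}

  static boolean supportsSnapshotDiff(FileSystem fs) {
    if (fs instanceof DistributedFileSystem) {
      return true;              // native DFS snapshot diff
    } else if (fs instanceof WebHdfsFileSystem) {
      return true;              // snapshot diff over WebHDFS (per HDFS-13052)
    }
    return false;               // anything else cannot do a snapshot-based sync
  }

  public static void main(String[] args) {
    System.out.println(supportsSnapshotDiff(new WebHdfsFileSystem())); // true
    System.out.println(supportsSnapshotDiff(new FileSystem()));        // false
  }
}
```

In the real DistCpSync.preSyncCheck, anything that is neither type would fall through to the "not supported" error path.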

 

Line 262: NIT checkstyle (line longer than 80)

 

TestDistCpSync.java

Line 73-77: NIT: unrelated formatting change can be avoided.

Line 105: same as above, please avoid formatting only change in other places 
too.

Line 163-171: initData()/changeData() refactor is not needed as we have a 
single cluster and we can always initData with dfs.

Line 311/325: NIT: typo: weather -> whether

Line 839/878: can we refactor the common part of 
testSyncSnapshotDiffWithWebHdfs2 and testSyncSnapshotDiffWithWebHdfs3 into a 
testHelper to reduce duplicated code?

 

> Distcp SnapshotDiff not completely implemented for supporting WebHdfs
> -
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: 
> https://issues.apache.org/jira/browse/HDFS-13052 to provide the possibility 
> to make DistCP of SnapshotDiff with WebHDFSFileSystem. However, in the patch, 
> there is no modification for the real java class which is used by launching 
> the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.






[jira] [Updated] (HDFS-13906) RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands

2018-09-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13906:

Attachment: HDFS-13906-04.patch

> RBF: Add multiple paths for dfsrouteradmin "rm" and "clrquota" commands
> ---
>
> Key: HDFS-13906
> URL: https://issues.apache.org/jira/browse/HDFS-13906
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13906-01.patch, HDFS-13906-02.patch, 
> HDFS-13906-03.patch, HDFS-13906-04.patch
>
>
> Currently we have the option to delete only one mount entry at a time. 
> If we have multiple mount entries, it is tedious for the user to execute the 
> command N times.
> It would be better if the "rm" and "clrQuota" commands supported multiple 
> entries, so the user could provide all the required entries in one single 
> command.
> The Namenode already supports "rm" and "clrQuota" with multiple destinations.
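The requested behaviour amounts to looping over the argument list inside one command invocation. A minimal sketch, with hypothetical names (this is not the actual RouterAdmin code):

```java
import java.util.List;

// Illustrative sketch: "rm" accepts N mount entries and removes each
// one in turn, instead of requiring one invocation per entry.
class RouterAdminSketch {
    /** Removes every given path from the mount table; returns how many were removed. */
    static int removeMounts(List<String> mountTable, String... paths) {
        int removed = 0;
        for (String path : paths) {      // one command, many entries
            if (mountTable.remove(path)) {
                removed++;
            }
        }
        return removed;
    }
}
```

The same pattern would apply to "clrQuota": validate each path, apply the operation, and report per-path failures without aborting the rest.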






[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-27 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630767#comment-16630767
 ] 

Erik Krogen commented on HDFS-13791:


Thanks [~vagarychen]!

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch, HDFS-13791-HDFS-12943.004.patch, 
> HDFS-13791-HDFS-12943.005.patch, HDFS-13791-HDFS-12943.006.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> once every few tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle the logging so that 
> these statements are printed at most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.
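The throttling idea described above can be sketched as follows, in the spirit of HDFS-10713. The class and method names are illustrative, not the helper this patch actually adds: a log record is suppressed unless at least `periodMs` has elapsed since the last record that was let through.

```java
// Minimal sketch of time-based log throttling: at most one record per
// periodMs window is logged; everything else in the window is dropped.
class LogThrottleSketch {
    private final long periodMs;
    private long lastLoggedMs;
    private boolean loggedOnce = false;

    LogThrottleSketch(long periodMs) {
        this.periodMs = periodMs;
    }

    /** Returns true if a record arriving at nowMs should be logged. */
    boolean shouldLog(long nowMs) {
        if (!loggedOnce || nowMs - lastLoggedMs >= periodMs) {
            lastLoggedMs = nowMs;   // start a new suppression window
            loggedOnce = true;
            return true;
        }
        return false;               // inside the window: suppressed
    }
}
```

A real implementation would also summarize what was suppressed (e.g. a count of skipped records) when the next statement is emitted, since the statements here are spread across several classes.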






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630764#comment-16630764
 ] 

Hudson commented on HDDS-401:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15067 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15067/])
HDDS-401. Update storage statistics on dead node. Contributed by LiXin (ajay: 
rev 184544eff8b3d5951e4f04d080fa00de8e1eec95)
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java


> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.
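At a high level, the change described above means the aggregated cluster statistics forget a node's storage when it is declared dead. A sketch under hypothetical names (this is not the actual SCMNodeManager/DeadNodeHandler code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: on dead-node detection, subtract the node's
// capacity and usage from the aggregated storage statistics.
class ClusterStorageStatsSketch {
    long capacity;   // total bytes across live nodes
    long used;       // used bytes across live nodes
    private final Map<String, long[]> perNode = new HashMap<>();

    void register(String nodeId, long nodeCapacity, long nodeUsed) {
        perNode.put(nodeId, new long[] {nodeCapacity, nodeUsed});
        capacity += nodeCapacity;
        used += nodeUsed;
    }

    /** What a dead-node handler would do: drop the node's contribution. */
    void onDeadNode(String nodeId) {
        long[] stats = perNode.remove(nodeId);
        if (stats != null) {   // ignore nodes we never registered
            capacity -= stats[0];
            used -= stats[1];
        }
    }
}
```

Keeping a per-node snapshot of the last reported stats is what makes the subtraction possible once the node stops heartbeating.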






[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-27 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630753#comment-16630753
 ] 

Chen Liang commented on HDFS-13791:
---

+1 on v006 patch, I've committed to the feature branch, thanks [~xkrogen] for 
the contribution!

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch, HDFS-13791-HDFS-12943.004.patch, 
> HDFS-13791-HDFS-12943.005.patch, HDFS-13791-HDFS-12943.006.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.






[jira] [Updated] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-27 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13791:
--
   Resolution: Fixed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch, HDFS-13791-HDFS-12943.004.patch, 
> HDFS-13791-HDFS-12943.005.patch, HDFS-13791-HDFS-12943.006.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.






[jira] [Commented] (HDFS-13052) WebHDFS: Add support for snasphot diff

2018-09-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630743#comment-16630743
 ] 

Xiaoyu Yao commented on HDFS-13052:
---

Thanks [~renxunsaky] for reporting the issue. This ticket adds support for 
snapshot diff to WebHDFS. DistCp is a use case that we need to fix in 
HDFS-13916.

> WebHDFS: Add support for snasphot diff
> --
>
> Key: HDFS-13052
> URL: https://issues.apache.org/jira/browse/HDFS-13052
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: snapshot, webhdfs
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HDFS-13052.001.patch, HDFS-13052.002.patch, 
> HDFS-13052.003.patch, HDFS-13052.004.patch, HDFS-13052.005.patch, 
> HDFS-13052.006.patch, HDFS-13052.007.patch
>
>
> This Jira aims to implement snapshot diff operation for webHdfs filesystem.






[jira] [Updated] (HDDS-401) Update storage statistics on dead node

2018-09-27 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-401:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Commented] (HDDS-401) Update storage statistics on dead node

2018-09-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630729#comment-16630729
 ] 

Ajay Kumar commented on HDDS-401:
-

[~GeLiXin] thanks for the contribution. I have committed this to trunk after 
fixing two checkstyle issues.

> Update storage statistics on dead node 
> ---
>
> Key: HDDS-401
> URL: https://issues.apache.org/jira/browse/HDDS-401
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Attachments: HDDS-401.000.patch, HDDS-401.001.patch, 
> HDDS-401.002.patch, HDDS-401.003.patch, HDDS-401.004.patch, HDDS-401.005.patch
>
>
> This is a follow-up Jira for HDDS-369.
> As per [~ajayydv]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
>  on detecting a dead node in the cluster, we should update the storage stats 
> such as usage, space left.






[jira] [Updated] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13945:

Description: 
The test has been failing in trunk for a long time.

Reference -

[https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

[https://builds.apache.org/job/PreCommit-HDFS-Build/25135/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

[https://builds.apache.org/job/PreCommit-HDFS-Build/25133/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

[https://builds.apache.org/job/PreCommit-HDFS-Build/25104/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

 

Stack Trace -

 

Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 03:32:07,162

"IPC Server handler 2 on 33471" daemon prio=5 tid=2931 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"IPC Server handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 tid=2633 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server Responder" daemon prio=5 tid=2766 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334)
    at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317)

"org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon prio=5 tid=2492 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"qtp548667392-2533" daemon prio=5 tid=2533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63818b03" daemon prio=5 tid=2521 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at 

[jira] [Updated] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13945:

Attachment: HDFS-13945-01.patch

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch
>
>
> The test is failing in trunk since long.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) 
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 
> tid=2633 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
> at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 
> tid=2766 runnable java.lang.Thread.State: RUNNABLE at 
> sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
> org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334) at 
> org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317) 
> "org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon 
> prio=5 tid=2492 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) "qtp548667392-2533" daemon prio=5 
> tid=2533 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  at java.lang.Thread.run(Thread.java:748) 
> "org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63818b03"
>  daemon prio=5 tid=2521 timed_waiting java.lang.Thread.State: TIMED_WAITING 
> at java.lang.Thread.sleep(Native Method) at 
> 

[jira] [Updated] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13945:

Status: Patch Available  (was: Open)

> TestDataNodeVolumeFailure is Flaky
> --
>
> Key: HDFS-13945
> URL: https://issues.apache.org/jira/browse/HDFS-13945
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13945-01.patch
>
>
> The test is failing in trunk since long.
> Reference -
> [https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]
>  
> Stack Trace -
>  
> Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 
> 03:32:07,162 "IPC Server handler 2 on 33471" daemon prio=5 tid=2931 
> timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) "IPC Server 
> handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting 
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
> at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288) at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668) 
> "org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 
> tid=2633 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> java.lang.Thread.sleep(Native Method) at 
> org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) 
> at java.lang.Thread.run(Thread.java:748) "IPC Server Responder" daemon prio=5 
> tid=2766 runnable java.lang.Thread.State: RUNNABLE at 
> sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
> org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334) at 
> org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317) 
> "org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon 
> prio=5 tid=2492 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) "qtp548667392-2533" daemon prio=5 
> tid=2533 timed_waiting java.lang.Thread.State: TIMED_WAITING at 
> sun.misc.Unsafe.park(Native Method) at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
>  at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
>  at java.lang.Thread.run(Thread.java:748) 
> "org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63818b03"
>  daemon prio=5 tid=2521 timed_waiting java.lang.Thread.State: TIMED_WAITING 
> at java.lang.Thread.sleep(Native Method) at 
> 

[jira] [Created] (HDFS-13945) TestDataNodeVolumeFailure is Flaky

2018-09-27 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-13945:
---

 Summary: TestDataNodeVolumeFailure is Flaky
 Key: HDFS-13945
 URL: https://issues.apache.org/jira/browse/HDFS-13945
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


The test has been failing in trunk for a long time.

Reference -

[https://builds.apache.org/job/PreCommit-HDFS-Build/25140/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailure/testUnderReplicationAfterVolFailure/]

 

Stack Trace -

 

Timed out waiting for condition. Thread diagnostics: Timestamp: 2018-09-26 03:32:07,162

"IPC Server handler 2 on 33471" daemon prio=5 tid=2931 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"IPC Server handler 3 on 34285" daemon prio=5 tid=2646 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:288)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2668)

"org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d2ee4cd" daemon prio=5 tid=2633 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    at java.lang.Thread.run(Thread.java:748)

"IPC Server Responder" daemon prio=5 tid=2766 runnable
java.lang.Thread.State: RUNNABLE
    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    at org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:1334)
    at org.apache.hadoop.ipc.Server$Responder.run(Server.java:1317)

"org.eclipse.jetty.server.session.HashSessionManager@1287fc65Timer" daemon prio=5 tid=2492 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

"qtp548667392-2533" daemon prio=5 tid=2533 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
    at java.lang.Thread.run(Thread.java:748)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63818b03" daemon prio=5 tid=2521 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4045)
    at java.lang.Thread.run(Thread.java:748)

"BP-1973654218-172.17.0.2-1537975830395 heartbeating to localhost/127.0.0.1:43522" daemon prio=5 tid=2640 timed_waiting
java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158)
    at 

[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13944:
---
Status: Patch Available  (was: Open)

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630710#comment-16630710
 ] 

Íñigo Goiri commented on HDFS-13944:


I did a first pass in  [^HDFS-13944.000.patch], let's see if Yetus calls the 
others.
We need to disable the checks for:
* {{HdfsServerFederationProtos}}
* {{RouterProtocolProtos}}

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.






[jira] [Updated] (HDFS-13944) [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module

2018-09-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13944:
---
Attachment: HDFS-13944.000.patch

> [JDK10] Fix javadoc errors in hadoop-hdfs-rbf module
> 
>
> Key: HDFS-13944
> URL: https://issues.apache.org/jira/browse/HDFS-13944
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13944.000.patch, javadoc-rbf.log
>
>
> There are 34 errors in hadoop-hdfs-rbf module.






[jira] [Commented] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630691#comment-16630691
 ] 

Siyao Meng commented on HDFS-13938:
---

[~linyiqun] Okay, done. Thanks!

> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: https://issues.apache.org/jira/browse/HDFS-13938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Attachments: HDFS-13938.001.patch, HDFS-13938.001.patch
>
>
> In BaseTestHttpFSWith:
> {code:java}
> case DISALLOW_SNAPSHOT:
>   testDisallowSnapshot();
>   break;
> case DISALLOW_SNAPSHOT_EXCEPTION:
>   testDisallowSnapshotException();
>   // Missed a "break" here.
> case FILE_STATUS_ATTR:
>   testFileStatusAttr();
>   break;
> {code}
> The missing "break" won't cause any bugs though. Just the fact that 
> testFileStatusAttr() will be run a second time after 
> testDisallowSnapshotException() finishes. :P
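The fall-through can be demonstrated with a small stand-alone sketch; the Operation enum and the recorded method names below are stand-ins for the real BaseTestHttpFSWith code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the switch in question: with the added "break", dispatching
// DISALLOW_SNAPSHOT_EXCEPTION no longer falls through into FILE_STATUS_ATTR.
public class FallThroughDemo {
    enum Operation { DISALLOW_SNAPSHOT, DISALLOW_SNAPSHOT_EXCEPTION, FILE_STATUS_ATTR }

    static final List<String> calls = new ArrayList<>();

    static void dispatch(Operation op) {
        switch (op) {
            case DISALLOW_SNAPSHOT:
                calls.add("testDisallowSnapshot");
                break;
            case DISALLOW_SNAPSHOT_EXCEPTION:
                calls.add("testDisallowSnapshotException");
                break; // the fix: without this, the next arm also runs
            case FILE_STATUS_ATTR:
                calls.add("testFileStatusAttr");
                break;
        }
    }

    public static void main(String[] args) {
        dispatch(Operation.DISALLOW_SNAPSHOT_EXCEPTION);
        System.out.println(calls); // prints [testDisallowSnapshotException]
    }
}
```

Without the `break` after the second arm, `calls` would also contain `testFileStatusAttr` — exactly the harmless double-run described above.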






[jira] [Updated] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13938:
--
Attachment: HDFS-13938.001.patch
Status: Patch Available  (was: Open)

> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: https://issues.apache.org/jira/browse/HDFS-13938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Attachments: HDFS-13938.001.patch, HDFS-13938.001.patch
>
>
> In BaseTestHttpFSWith:
> {code:java}
> case DISALLOW_SNAPSHOT:
>   testDisallowSnapshot();
>   break;
> case DISALLOW_SNAPSHOT_EXCEPTION:
>   testDisallowSnapshotException();
>   // Missed a "break" here.
> case FILE_STATUS_ATTR:
>   testFileStatusAttr();
>   break;
> {code}
> The missing "break" won't cause any bugs though. Just the fact that 
> testFileStatusAttr() will be run a second time after 
> testDisallowSnapshotException() finishes. :P






[jira] [Updated] (HDFS-13938) Add a missing "break" in BaseTestHttpFSWith

2018-09-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13938:
--
Status: Open  (was: Patch Available)

> Add a missing "break" in BaseTestHttpFSWith
> ---
>
> Key: HDFS-13938
> URL: https://issues.apache.org/jira/browse/HDFS-13938
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Trivial
> Attachments: HDFS-13938.001.patch, HDFS-13938.001.patch
>
>
> In BaseTestHttpFSWith:
> {code:java}
> case DISALLOW_SNAPSHOT:
>   testDisallowSnapshot();
>   break;
> case DISALLOW_SNAPSHOT_EXCEPTION:
>   testDisallowSnapshotException();
>   // Missed a "break" here.
> case FILE_STATUS_ATTR:
>   testFileStatusAttr();
>   break;
> {code}
> The missing "break" won't cause any bugs though. Just the fact that 
> testFileStatusAttr() will be run a second time after 
> testDisallowSnapshotException() finishes. :P






[jira] [Updated] (HDDS-541) ozone volume quota is not honored

2018-09-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-541:
--
Target Version/s: 0.3.0

> ozone volume quota is not honored
> -
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Hanisha Koneru
>Priority: Major
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
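A minimal sketch of the kind of check that appears to be missing; the names below are hypothetical, and the real fix would enforce this wherever the OzoneManager allocates space for a key.

```java
// Hedged sketch, assuming a hypothetical "quota unset" sentinel and that the
// caller knows the volume's current usage at key-allocation time.
public class VolumeQuota {
    static final long NO_QUOTA = -1L; // hypothetical sentinel for "no quota set"

    /** True when allocating requestedBytes would push usage past the quota. */
    static boolean exceedsQuota(long quotaBytes, long usedBytes, long requestedBytes) {
        return quotaBytes != NO_QUOTA && usedBytes + requestedBytes > quotaBytes;
    }

    public static void main(String[] args) {
        // The scenario from the report: a 1048576-byte quota vs. a 165903437-byte key.
        System.out.println(exceedsQuota(1048576L, 0L, 165903437L)); // prints true
    }
}
```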






[jira] [Updated] (HDDS-525) Support virtual-hosted style URLs

2018-09-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-525:
--
Target Version/s: 0.2.2  (was: 0.3.0)

> Support virtual-hosted style URLs
> -
>
> Key: HDDS-525
> URL: https://issues.apache.org/jira/browse/HDDS-525
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-525.00.patch, HDDS-525.02.patch, HDDS-525.03.patch
>
>
> AWS supports two kinds of patterns for the base URL of the S3 REST API: 
> virtual-hosted style and path-style.
> Path style: http://s3.us-east-2.amazonaws.com/bucket
> Virtual-hosted style: http://bucket.s3.us-east-2.amazonaws.com
> By default we support the path-style method with the volume name in the URL:
> http://s3.us-east-2.amazonaws.com/volume/bucket
> Here the endpoint URL is http://s3.us-east-2.amazonaws.com/volume/ and the 
> bucket is appended.
> Some of the 3rd-party s3 tools (goofys is an example) support only the 
> virtual-hosted style method. With goofys we can set a custom endpoint 
> (http://localhost:9878), but all the other postfixes after the port are 
> removed.
> It can be solved by using a virtual-hosted style URL which also includes the 
> volume name:
> http://bucket.volume..com
> The easiest way to support both of them is to implement a 
> ContainerRequestFilter which can parse the hostname (based on a configuration 
> value) and extend the existing URL by adding the missing volume/bucket 
> part. 
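A sketch of just the host-parsing step such a filter would need. The method and the configured s3 domain parameter are hypothetical; a real ContainerRequestFilter would then rewrite the request URI with the recovered volume/bucket prefix.

```java
// Hedged sketch: recover "volume/bucket" from a virtual-hosted Host header
// of the form bucket.volume.<s3-domain>, falling back to null for path-style.
public class VirtualHostParser {

    /** Returns "volume/bucket" for a virtual-hosted Host header, or null
     *  when the header is plain path-style (or unparseable). */
    static String parseVolumeBucket(String host, String s3Domain) {
        if (host == null || !host.endsWith("." + s3Domain)) {
            return null; // path-style request, nothing to rewrite
        }
        String prefix = host.substring(0, host.length() - s3Domain.length() - 1);
        String[] parts = prefix.split("\\.");
        if (parts.length != 2) {
            return null; // expect exactly bucket.volume
        }
        return parts[1] + "/" + parts[0]; // volume/bucket
    }
}
```

For example, with a configured domain of `localhost`, a request for `bucket1.hive.localhost` would map back to `hive/bucket1`.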






[jira] [Updated] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-561:
--
Target Version/s: 0.2.2

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch
>
>







[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.013.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch, 
> HDDS-325.012.patch, HDDS-325.013.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> RPC call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.
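The general watcher pattern this describes can be sketched generically (all names below are hypothetical, not the actual HDDS classes): track issued commands, retire them when the datanode's regular report arrives, and retry the ones that time out, instead of relying on a dedicated ack RPC.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a command watcher: pending commands are tracked by id
// with their send time; acks (piggy-backed on reports) clear them, and
// expired entries are handed back to the caller for resend.
public class CommandWatcher {
    private final Map<Long, Long> pending = new ConcurrentHashMap<>(); // cmdId -> sent time

    void commandSent(long cmdId, long nowMillis) {
        pending.put(cmdId, nowMillis);
    }

    void ackReceived(long cmdId) {
        pending.remove(cmdId); // completion observed via the datanode report
    }

    /** Commands whose ack has not arrived within the timeout; resend candidates. */
    List<Long> expired(long nowMillis, long timeoutMillis) {
        List<Long> out = new ArrayList<>();
        pending.forEach((id, sent) -> {
            if (nowMillis - sent > timeoutMillis) {
                out.add(id);
            }
        });
        return out;
    }
}
```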






[jira] [Commented] (HDFS-13890) Allow Delimited PB OIV tool to print out snapshots

2018-09-27 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630579#comment-16630579
 ] 

Adam Antal commented on HDFS-13890:
---

Uploaded patch v1: added the --addSnapshots option to the OIV tool, and if 
given, the processor produces a single folder entry for each snapshot root in 
the delimited output. Uploaded a test to check this.

> Allow Delimited PB OIV tool to print out snapshots
> --
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HDFS-13890.001.patch
>
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this is shown within the Delimited output (in the 
> Delimited format).






[jira] [Commented] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-09-27 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630576#comment-16630576
 ] 

Surendra Singh Lilhore commented on HDFS-13768:
---

Thanks [~linyiqun].  



Attached the v6 patch and fixed the whitespace warnings.
{quote}BTW, [~surendrasingh], would you mind making a new test based on latest 
patch? I am curious about current rate compared with the data you gave before.
{quote}
With the latest patch it took 3465ms, which is almost 80% faster compared to the 
initial time (16772ms).

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found DNs starting very slowly when rolling-upgrading our cluster. When we 
> restart DNs, they start very slowly and do not register to the NN immediately, 
> which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the logic of DN startup, it performs the initial block pool 
> operation before registration. During block pool initialization, we found 
> that adding replicas to the volume map is the most expensive operation. 
> Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still need to wait for the slowest thread to finish its work. 
> So the main problem here is whether we can make the threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> 

[jira] [Updated] (HDFS-13890) Allow Delimited PB OIV tool to print out snapshots

2018-09-27 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HDFS-13890:
--
Status: Patch Available  (was: In Progress)

> Allow Delimited PB OIV tool to print out snapshots
> --
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HDFS-13890.001.patch
>
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this is shown within the Delimited output (in the 
> Delimited format).






[jira] [Updated] (HDFS-13890) Allow Delimited PB OIV tool to print out snapshots

2018-09-27 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HDFS-13890:
--
Attachment: HDFS-13890.001.patch

> Allow Delimited PB OIV tool to print out snapshots
> --
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HDFS-13890.001.patch
>
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this is shown within the Delimited output (in the 
> Delimited format).






[jira] [Updated] (HDFS-13768) Adding replicas to volume map makes DataNode start slowly

2018-09-27 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-13768:
--
Attachment: HDFS-13768.06.patch

>  Adding replicas to volume map makes DataNode start slowly 
> ---
>
> Key: HDFS-13768
> URL: https://issues.apache.org/jira/browse/HDFS-13768
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-13768.01.patch, HDFS-13768.02.patch, 
> HDFS-13768.03.patch, HDFS-13768.04.patch, HDFS-13768.05.patch, 
> HDFS-13768.06.patch, HDFS-13768.patch, screenshot-1.png
>
>
> We found DNs starting very slowly when rolling-upgrading our cluster. When we 
> restart DNs, they start very slowly and do not register to the NN immediately, 
> which causes a lot of the following errors:
> {noformat}
> DataXceiver error processing WRITE_BLOCK operation  src: /xx.xx.xx.xx:64360 
> dst: /xx.xx.xx.xx:50010
> java.io.IOException: Not ready to serve the block pool, 
> BP-1508644862-xx.xx.xx.xx-1493781183457.
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(DataXceiver.java:1290)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:630)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Looking into the logic of DN startup, it performs the initial block pool 
> operation before registration. During block pool initialization, we found 
> that adding replicas to the volume map is the most expensive operation. 
> Related log:
> {noformat}
> 2018-07-26 10:46:23,771 INFO [Thread-105] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/1/dfs/dn/current: 242722ms
> 2018-07-26 10:46:26,231 INFO [Thread-109] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/5/dfs/dn/current: 245182ms
> 2018-07-26 10:46:32,146 INFO [Thread-112] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/8/dfs/dn/current: 251097ms
> 2018-07-26 10:47:08,283 INFO [Thread-106] 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to 
> add replicas to map for block pool BP-1508644862-xx.xx.xx.xx-1493781183457 on 
> volume /home/hard_disk/2/dfs/dn/current: 287235ms
> {noformat}
> Currently the DN uses an independent thread to scan and add replicas for each 
> volume, but we still need to wait for the slowest thread to finish its work. 
> So the main problem here is whether we can make the threads run faster.
> The jstack we get when DN blocking in the adding replica:
> {noformat}
> "Thread-113" #419 daemon prio=5 os_prio=0 tid=0x7f40879ff000 nid=0x145da 
> runnable [0x7f4043a38000]
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.list(Native Method)
>   at java.io.File.list(File.java:1122)
>   at java.io.File.listFiles(File.java:1207)
>   at org.apache.hadoop.fs.FileUtil.listFiles(FileUtil.java:1165)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:445)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(BlockPoolSlice.java:448)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:342)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:864)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:191)
> {noformat}
> One improvement: maybe we can use a ForkJoinPool for this recursive task, 
> rather than scanning synchronously. This would be a great improvement because 
> it can greatly speed up the recovery process.
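The ForkJoinPool idea from the description can be sketched as follows. The in-memory `Dir` tree and block-file names are toy stand-ins for the real on-disk BlockPoolSlice layout; the point is only that each subdirectory becomes a forked subtask, so the scan of one volume is no longer limited to a single thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hedged sketch of a fork/join directory scan, not the actual HDFS code.
public class ParallelReplicaScan {
    static class Dir {
        final List<Dir> subdirs = new ArrayList<>();
        final List<String> blockFiles = new ArrayList<>();
    }

    static class ScanTask extends RecursiveTask<List<String>> {
        private final Dir dir;
        ScanTask(Dir dir) { this.dir = dir; }

        @Override
        protected List<String> compute() {
            List<ScanTask> forked = new ArrayList<>();
            for (Dir sub : dir.subdirs) {
                ScanTask t = new ScanTask(sub);
                t.fork();                       // scan subdirectories in parallel
                forked.add(t);
            }
            List<String> replicas = new ArrayList<>(dir.blockFiles); // files at this level
            for (ScanTask t : forked) {
                replicas.addAll(t.join());      // merge child results
            }
            return replicas;
        }
    }

    static List<String> scan(Dir root) {
        return ForkJoinPool.commonPool().invoke(new ScanTask(root));
    }
}
```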




[jira] [Commented] (HDFS-13818) Extend OIV to detect FSImage corruption

2018-09-27 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630373#comment-16630373
 ] 

Adam Antal commented on HDFS-13818:
---

I admit users might be lured into overconfidence, though it has been emphasized 
both in the command line and the Hadoop Guide (Image Viewer markdown file) that 
the search is not exhaustive. We can discuss a better name for the command / one 
that would not cause confusion to the users.

Indeed, if one wants to convince himself that the fsimage is not corrupted in 
any way, the best option is to pick a NN and load the image. Aside from that, it 
still has several advantages compared to full NN checking, so I believe the new 
processor fits well for other purposes.

> Extend OIV to detect FSImage corruption
> ---
>
> Key: HDFS-13818
> URL: https://issues.apache.org/jira/browse/HDFS-13818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HDFS-13818.001.patch, HDFS-13818.002.patch, 
> HDFS-13818.003.patch, HDFS-13818.003.patch, HDFS-13818.004.patch, 
> HDFS-13818.005.patch, HDFS-13818.006.patch, 
> OIV_CorruptionDetector_processor.001.pdf, 
> OIV_CorruptionDetector_processor.002.pdf
>
>
> A follow-up Jira for HDFS-13031: an improvement of the OIV is suggested for 
> detecting corruptions like HDFS-13101 in an offline way.
> The reasoning is the following. Apart from a NN startup throwing the error, 
> there is nothing in the customer's hands that could tell him/her whether the 
> FSImage is good or corrupted.
> Although real full checking of the FSImage is only possible by the NN, for 
> the stack traces associated with the observed corruption cases the solution of 
> putting up a tertiary NN is a little bit of overkill. The OIV would be a 
> handy choice: it already has functionality like loading the fsimage and 
> constructing the folder structure, so we just have to add the option of 
> detecting the null INodes. For example, the Delimited OIV processor can 
> already use an on-disk MetadataMap, which reduces memory consumption. Also 
> there may be a window for parallelizing: iterating through INodes, for 
> example, could be done in a distributed way, increasing efficiency, and we 
> wouldn't need a high-mem, high-CPU setup just for checking the FSImage.
> The suggestion is to add a --detectCorruption option to the OIV which would 
> check the FSImage for consistency.
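The null-INode detection can be sketched abstractly. The toy in-memory maps below stand in for the OIV's metadata map, and every name is hypothetical: the check flags any child id that a directory references but that resolves to no inode.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Very rough sketch of the proposed consistency check, not the actual OIV code.
public class CorruptionCheck {

    /** Child ids referenced by some directory but absent from the inode map. */
    static List<Long> findDanglingChildren(Map<Long, long[]> dirToChildren,
                                           Map<Long, String> inodes) {
        List<Long> dangling = new ArrayList<>();
        for (long[] children : dirToChildren.values()) {
            for (long child : children) {
                if (!inodes.containsKey(child)) {
                    dangling.add(child); // a "null INode": referenced but missing
                }
            }
        }
        return dangling;
    }
}
```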






[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-27 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630097#comment-16630097
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v12 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch, HDDS-325.012.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> RPC call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.






[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.012.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch, HDDS-325.012.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> RPC call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.






[jira] [Updated] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-561:
-
Attachment: HDDS-561.001.patch

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch
>
>







[jira] [Updated] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-561:
-
Status: Patch Available  (was: Open)

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch
>
>







[jira] [Created] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-561:


 Summary: Move Node2ContainerMap and Node2PipelineMap to NodeManager
 Key: HDDS-561
 URL: https://issues.apache.org/jira/browse/HDDS-561
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain









[jira] [Comment Edited] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-09-27 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629874#comment-16629874
 ] 

Takanobu Asanuma edited comment on HDFS-13943 at 9/27/18 7:39 AM:
--

Thanks for working on this, [~ajisakaa]! Some comments:

*ClientProtocol:*
 It would be better to format the paragraphs further.
{noformat}
Configuration parameters
...
Special cases
{noformat}
*ClientProtocol, DFSUtilClient:*
 For the generics in the javadoc, it would be better to use the HTML escape codes (&lt; and &gt;) instead of deleting them.

*StripedBlockUtil:*
 IIUC, the ASCII art is there for developers who read the source code directly, so I suggest we leave it as it is. How about using plain comments ( /* ... \*/ ) instead of javadoc ( /** ... \*/ )?
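The two suggestions above can be sketched in a tiny, hypothetical Java class (the class, method, and diagram below are invented for illustration and are not code from the patch): generics mentioned in javadoc prose are written with &lt; and &gt; escapes so the javadoc tool accepts them, while an ASCII-art diagram lives in a plain block comment that javadoc skips.

```java
import java.util.Arrays;
import java.util.List;

public class JavadocEscapeExample {
    /**
     * Returns a sample list of block names.
     *
     * <p>Generic types mentioned in javadoc prose are written with HTML
     * escapes, e.g. List&lt;String&gt;, so the javadoc tool does not
     * reject the raw angle brackets.
     *
     * @return a fixed-size List&lt;String&gt; of sample names
     */
    static List<String> sampleBlockNames() {
        return Arrays.asList("blk_1", "blk_2");
    }

    /* An ASCII-art diagram meant only for people reading the source can
     * live in a plain block comment like this one; the javadoc tool
     * skips it because the comment does not start with two asterisks.
     *
     *   +---------+      +---------+
     *   | cell 0  | ---> | cell 1  |
     *   +---------+      +---------+
     */
    public static void main(String[] args) {
        System.out.println(sampleBlockNames().size());
    }
}
```

Under this scheme, `javadoc` renders the escaped brackets as literal `<` and `>` in the generated HTML, and the diagram never reaches the HTML output at all.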


was (Author: tasanuma0829):
Thanks for working on this, [~ajisakaa]! Some comments

*ClientProtocol:*
 It would be better to format the paragraphs more.
{noformat}
Configuration parameters
...
Special cases
{noformat}
*ClientProtocol, DFSUtilClient:*
 For the generics in the javadoc, It would be better to use the escape codes(lt 
and gt) instead of deleting them.

*StripedBlockUtil:*
 IIUC, the ascii arts are for developers who are reading source codes directly. 
So I suggest we leave them as they are. How about using just comments (/* ... 
*/) instead of javadoc (/** ... */ )?

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13943.01.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.






[jira] [Comment Edited] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-09-27 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629874#comment-16629874
 ] 

Takanobu Asanuma edited comment on HDFS-13943 at 9/27/18 7:38 AM:
--

Thanks for working on this, [~ajisakaa]! Some comments:

*ClientProtocol:*
 It would be better to format the paragraphs further.
{noformat}
Configuration parameters
...
Special cases
{noformat}
*ClientProtocol, DFSUtilClient:*
 For the generics in the javadoc, it would be better to use the HTML escape codes (&lt; and &gt;) instead of deleting them.

*StripedBlockUtil:*
 IIUC, the ASCII art is there for developers who read the source code directly, so I suggest we leave it as it is. How about using plain comments (/* ... */) instead of javadoc (/** ... */)?


was (Author: tasanuma0829):
Thanks for working on this, [~ajisakaa]! Some comments

*ClientProtocol:*
 It would be better to format the paragraphs more.
{noformat}
Configuration parameters
...
Special cases
{noformat}
*ClientProtocol, DFSUtilClient:*
 For the generics in the javadoc, it would be better to use the escape 
codes (&lt; and &gt;) instead of deleting them.

*StripedBlockUtil:*
 IIUC, the ascii arts are for developers who are reading source codes directly. 
So I suggest we leave them as they are. How about using just comments (/* ... 
*/) instead of javadoc (/** ... */ )?

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13943.01.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.






[jira] [Commented] (HDFS-13943) [JDK10] Fix javadoc errors in hadoop-hdfs-client module

2018-09-27 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629874#comment-16629874
 ] 

Takanobu Asanuma commented on HDFS-13943:
-

Thanks for working on this, [~ajisakaa]! Some comments:

*ClientProtocol:*
 It would be better to format the paragraphs further.
{noformat}
Configuration parameters
...
Special cases
{noformat}
*ClientProtocol, DFSUtilClient:*
 For the generics in the javadoc, it would be better to use the escape 
codes (&lt; and &gt;) instead of deleting them.

*StripedBlockUtil:*
 IIUC, the ASCII art is there for developers who read the source code directly, so I suggest we leave it as it is. How about using plain comments (/* ... */) instead of javadoc (/** ... */)?

> [JDK10] Fix javadoc errors in hadoop-hdfs-client module
> ---
>
> Key: HDFS-13943
> URL: https://issues.apache.org/jira/browse/HDFS-13943
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13943.01.patch
>
>
> There are 85 errors in hadoop-hdfs-client module.





