[jira] [Commented] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599542#comment-16599542
 ] 

Hudson commented on HDDS-392:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14859 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14859/])
HDDS-392. Incomplete description about auditMap#key in AuditLogging (aengineer: 
rev 19abaacdad84b03fc790341b4b5bcf1c4d41f1fb)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/package-info.java


> Incomplete description about auditMap#key in AuditLogging Framework
> ---
>
> Key: HDDS-392
> URL: https://issues.apache.org/jira/browse/HDDS-392
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-392.001.patch
>
>
> Trivial issue where the description about key in auditMap is incomplete and 
> can lead to developers creating invalid audit keys for logging.
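
For context, the Ozone audit framework builds each audit log line from a map of 
key/value pairs, and the fixed javadoc clarifies what the map keys should be. A 
minimal sketch of how such an auditMap is typically assembled follows; the key 
names and the helper class are illustrative assumptions, not the committed code:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: the key names below are examples, not the exact
// documented set. The point of the doc fix is that each auditMap key should
// name the audited entity, with the entity's value as the map value.
public class AuditMapExample {
  public static Map<String, String> buildAuditMap(String volume,
      String bucket, String key) {
    Map<String, String> auditMap = new LinkedHashMap<>();
    auditMap.put("volume", volume);  // key = entity name, value = its value
    auditMap.put("bucket", bucket);
    auditMap.put("key", key);
    return auditMap;                 // handed on to the audit logger
  }
}
{code}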



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-392:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~dineshchitlangia] Thanks for fixing this issue. I have committed this patch 
to the trunk.

> Incomplete description about auditMap#key in AuditLogging Framework
> ---
>
> Key: HDDS-392
> URL: https://issues.apache.org/jira/browse/HDDS-392
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-392.001.patch
>
>
> Trivial issue where the description about key in auditMap is incomplete and 
> can lead to developers creating invalid audit keys for logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599535#comment-16599535
 ] 

Anu Engineer commented on HDDS-390:
---

Looks like we had no test failures. We should just do that conversion without 
assigning it to anything, maybe, so that the findbugs warning can be avoided?
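
For illustration, the dead-store pattern and the suggested fix look roughly like 
this; the method and variable names are taken from the findbugs report quoted 
later in this thread ("Dead store to ex in HddsClientUtils.verifyKeyName"), 
while the body is an assumption:

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Sketch only, not the actual patch.
public class VerifyKeyNameSketch {
  public static void verifyKeyName(String keyName) {
    try {
      // Before: the conversion result is stored but never read, which
      // findbugs reports as a dead store:
      //   URI ex = new URI(keyName);
      // Suggested fix: do the conversion for validation only, without
      // assigning the result to anything:
      new URI(keyName);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException("Invalid key name: " + keyName, e);
    }
  }
}
{code}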

> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-390.001.patch
>
>
> As per design, key names composed of valid characters from the URI character 
> set must be treated as valid key names.
> For the URI character set, see: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName(), similar to the 
> validateResourceName() that validates bucket/volume names.
>  
> A valid key name must:
>  * conform to the URI character set
>  * allow /
> TBD whether key names must follow other rules similar to volume/bucket names, 
> such as:
>  * should not start with a period or dash
>  * should not end with a period or dash
>  * should not have contiguous periods
>  * should not have a period after a dash and vice versa
> etc.
>  
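
As a rough illustration of the proposed check, here is a sketch that accepts a 
key name only if it consists of RFC 2396 URI characters; the method name follows 
the Jira's proposal, and the exact rule set (per the TBD items above) is an 
assumption:

{code:java}
import java.util.regex.Pattern;

// Sketch under the assumptions above: accepts RFC 2396 unreserved and
// reserved characters plus %-escapes, which implicitly allows '/'.
// The volume/bucket-style rules marked TBD are deliberately omitted.
public class KeyNameValidator {
  private static final Pattern URI_CHARS =
      Pattern.compile("^([A-Za-z0-9\\-_.!~*'();/?:@&=+$,]|%[0-9A-Fa-f]{2})+$");

  public static void validateKeyName(String keyName) {
    if (keyName == null || keyName.isEmpty()
        || !URI_CHARS.matcher(keyName).matches()) {
      throw new IllegalArgumentException(
          "Invalid key name (must use URI characters): " + keyName);
    }
  }
}
{code}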



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599520#comment-16599520
 ] 

genericqa commented on HDDS-336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m  8s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestOmBlockVersioning |
|   | 

[jira] [Commented] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599504#comment-16599504
 ] 

LiXin Ge commented on HDDS-336:
---

Thanks [~elek] for your reviews and information. I have fixed the 
checkstyle/javadoc issues in the 005 patch.

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> It would be a very big help during testing of the current SCM behaviour.
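
As a sketch of the output being asked for, the CLI could walk a key's block 
locations roughly like this; KeyLocation here is a stand-in type, not the actual 
Ozone client class:

{code:java}
import java.util.List;

// Hypothetical sketch: print the containerid/localid(=blockid) pair for
// each block of a key.
public class KeyLocationPrinter {
  static class KeyLocation {
    final long containerId;
    final long localId;
    KeyLocation(long containerId, long localId) {
      this.containerId = containerId;
      this.localId = localId;
    }
  }

  static void printLocations(String keyName, List<KeyLocation> locations) {
    System.out.println("Key: " + keyName);
    for (KeyLocation loc : locations) {
      System.out.printf("  containerID=%d localID(blockID)=%d%n",
          loc.containerId, loc.localId);
    }
  }
}
{code}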



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Patch Available  (was: Open)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> It would be a very big help during testing of the current SCM behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Attachment: HDDS-336.005.patch

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> It would be a very big help during testing of the current SCM behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-336) Print out container location information for a specific ozone key

2018-08-31 Thread LiXin Ge (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HDDS-336:
--
Status: Open  (was: Patch Available)

> Print out container location information for a specific ozone key 
> --
>
> Key: HDDS-336
> URL: https://issues.apache.org/jira/browse/HDDS-336
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: LiXin Ge
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-336.000.patch, HDDS-336.001.patch, 
> HDDS-336.002.patch, HDDS-336.003.patch, HDDS-336.004.patch, HDDS-336.005.patch
>
>
> In the protobuf protocol we have all the containerid/localid(=blockid) 
> information for a specific ozone key.
> It would be a big help to print out this information on the command line with 
> the ozone cli.
> It requires improving the REST and RPC interfaces with additional Ozone 
> KeyLocation information.
> It would be a very big help during testing of the current SCM behaviour.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-08-31 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-394:
--

 Summary: Rename *Key Apis in DatanodeContainerProtocol to *Block 
apis
 Key: HDDS-394
 URL: https://issues.apache.org/jira/browse/HDDS-394
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Mukul Kumar Singh


All the block APIs in the client-datanode interaction are named *Key APIs (e.g. 
PutKey). These can be renamed to *Block APIs (e.g. PutBlock).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-31 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599476#comment-16599476
 ] 

Mukul Kumar Singh commented on HDDS-387:


Hi [~hanishakoneru], yes, that's the idea I had in mind.

Also, I feel the ozonefs package having a dependency on integration-test is 
fine, as it is a test-only dependency.
I also feel that we should keep all filesystem tests in the ozonefs package and 
object store tests inside the integration-test package.
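
For reference, a test-only dependency of the kind described would look roughly 
like this in the ozonefs pom; a sketch, assuming the usual Hadoop group id:

{code:xml}
<!-- Sketch: a test-scoped dependency keeps integration-test classes off
     the compile and runtime classpaths of ozonefs. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-integration-test</artifactId>
  <scope>test</scope>
</dependency>
{code}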

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch, HDDS-387.002.patch
>
>
> hadoop-ozone-filesystem has dependency on hadoop-ozone-integration-test
> Ideally filesystem modules should not have dependency on test modules.
> This will also have issues while developing Unit Tests and trying to 
> instantiate OzoneFileSystem object inside hadoop-ozone-integration-test, as 
> that will create a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599451#comment-16599451
 ] 

genericqa commented on HDFS-13838:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937804/HDFS-13838.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e01e50921ba3 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 50d2e3e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-13532) RBF: Adding security

2018-08-31 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599449#comment-16599449
 ] 

CR Hota commented on HDFS-13532:


All,

I have set up a meeting for everyone to join and discuss the design.

Time: Sep 7th 2018, 2-3 PM PST

This is the Zoom link: [https://uber.zoom.us/j/372628408]

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, 
> RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based 
> Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-390:
--
Fix Version/s: 0.2.1

> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-390.001.patch
>
>
> As per design, key names composed of valid characters from the URI character 
> set must be treated as valid key names.
> For the URI character set, see: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName(), similar to the 
> validateResourceName() that validates bucket/volume names.
>  
> A valid key name must:
>  * conform to the URI character set
>  * allow /
> TBD whether key names must follow other rules similar to volume/bucket names, 
> such as:
>  * should not start with a period or dash
>  * should not end with a period or dash
>  * should not have contiguous periods
>  * should not have a period after a dash and vice versa
> etc.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599446#comment-16599446
 ] 

Anu Engineer edited comment on HDDS-379 at 9/1/18 1:18 AM:
---

[~ajayydv] Thanks for the reviews and interesting comments. [~elek] Thanks for 
the contribution. The code looks much better and the output is so much better; 
I would love it if the ozone and o3 commands also moved to this infrastructure. 
The DM_EXIT ignore fix is not working, but it really does not matter. I have 
committed this patch to the trunk.

 


was (Author: anu):
[~ajayydv] Thanks for the reviews and interesting comments. [~elek] Thanks for 
the contribution. The code looks much better and it is so much better; I would 
love it if the ozone and o3 commands also moved to this infrastructure. The 
DM_EXIT ignore fix is not working, but it really does not matter. I have 
committed this patch to the trunk.

 

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch, HDDS-379.004.patch, HDDS-379.005.patch, HDDS-379.006.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is argument handling, which is 
> mixed with the business logic.
> I propose to use a more modern argument-handling library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands, and both subcommand-specific and general 
> arguments.
> 2.) It works based on annotations, with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache License.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) It's an actively maintained project.
> 7.) It has been adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> the other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong).
> It also requires HDDS-377 to be compiled.
> I also deleted TestSCMCli. It was turned off with an annotation, and I 
> believe that this functionality could be tested more easily with a robot test.
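
To illustrate the annotation-driven style the description refers to, a minimal 
picocli sketch follows; the command and option names are made up for the 
example and are not from the patch:

{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Minimal picocli sketch: subcommands and options are declared with
// annotations, and picocli generates the parsing and the help output.
@Command(name = "scmcli", subcommands = ListContainers.class,
    description = "SCM command line tool (illustrative sketch)")
public class ScmCliSketch {
  public static void main(String[] args) {
    System.exit(new CommandLine(new ScmCliSketch()).execute(args));
  }
}

@Command(name = "list", description = "List containers")
class ListContainers implements Callable<Integer> {
  @Option(names = {"-c", "--count"}, description = "Max containers to list")
  private int count = 20;

  @Override
  public Integer call() {
    System.out.println("Would list up to " + count + " containers");
    return 0;
  }
}
{code}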



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-379) Simplify and improve the cli arg parsing of ozone scmcli

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-379:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ajayydv] Thanks for the reviews and interesting comments. [~elek] Thanks for 
the contribution. The code looks much better and it is so much better; I would 
love it if the ozone and o3 commands also moved to this infrastructure. The 
DM_EXIT ignore fix is not working, but it really does not matter. I have 
committed this patch to the trunk.

 

> Simplify and improve the cli arg parsing of ozone scmcli
> 
>
> Key: HDDS-379
> URL: https://issues.apache.org/jira/browse/HDDS-379
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-379.001.patch, HDDS-379.002.patch, 
> HDDS-379.003.patch, HDDS-379.004.patch, HDDS-379.005.patch, HDDS-379.006.patch
>
>
> SCMCLI is a useful tool to test SCM. It can create/delete/close/list 
> containers.
> There are multiple problems with the current scmcli.
> The biggest one is the cli argument handling. Similar to HDDS-190, it's often 
> very hard to get the help for a specific subcommand.
> The other one is that a big part of the code is argument handling, which is 
> mixed with the business logic.
> I propose to use a more modern argument-handling library and simplify the 
> argument handling (and improve the user experience).
> I propose to use [picocli|https://github.com/remkop/picocli].
> 1.) It supports subcommands, and both subcommand-specific and general 
> arguments.
> 2.) It works based on annotations, with very little additional boilerplate 
> code.
> 3.) It's very well documented and easy to use.
> 4.) It's licensed under the Apache License.
> 5.) It supports tab autocompletion for bash and zsh, and colorful output.
> 6.) It's an actively maintained project.
> 7.) It has been adopted by other bigger projects (groovy, junit, log4j).
> In this patch I would like to demonstrate how the cli handling could be 
> simplified. And if it's accepted, we can start to use a similar approach for 
> the other ozone clis as well.
> The patch also fixes the cli (the name of the main class was wrong).
> It also requires HDDS-377 to be compiled.
> I also deleted TestSCMCli. It was turned off with an annotation, and I 
> believe that this functionality could be tested more easily with a robot test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599438#comment-16599438
 ] 

genericqa commented on HDDS-390:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdds/client generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/client |
|  |  Dead store to ex in 
org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyKeyName(String)  At 
HddsClientUtils.java:org.apache.hadoop.hdds.scm.client.HddsClientUtils.verifyKeyName(String)
  At HddsClientUtils.java:[line 195] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937981/HDDS-390.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fabf3ddb4853 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 

[jira] [Work started] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-08-31 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13820 started by Hrishikesh Gadre.
---
> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 
> 30 seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.
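
A rough sketch of the requested behaviour, starting and stopping a monitor 
thread at the 0/1 cached-path boundaries; the gate class and its names are 
placeholders, not the actual CacheReplicationMonitor wiring:

{code:java}
// Hypothetical sketch: run the monitor only while at least one cached
// path exists. Thread handling is simplified for illustration.
public class CacheMonitorGate {
  private final Runnable monitorLoop;  // placeholder for the real monitor
  private Thread monitorThread;
  private int cachedPaths;

  public CacheMonitorGate(Runnable monitorLoop) {
    this.monitorLoop = monitorLoop;
  }

  public synchronized void pathAdded() {
    if (cachedPaths++ == 0) {          // first cached path: start monitoring
      monitorThread = new Thread(monitorLoop, "CacheReplicationMonitor");
      monitorThread.setDaemon(true);
      monitorThread.start();
    }
  }

  public synchronized void pathRemoved() {
    if (--cachedPaths == 0 && monitorThread != null) {
      monitorThread.interrupt();       // last cached path: stop monitoring
      monitorThread = null;
    }
  }
}
{code}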



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599429#comment-16599429
 ] 

Chen Liang commented on HDFS-13880:
---

{quote}LMK if there are other options? I am in favor of _uncoordinated_.
{quote}
I like _uncoordinated_ also; I guess we will officially call these types of 
methods the uncoordinated ones from this point :).
{quote}Another thought that write operations 
{quote}
That is a good point; I will need to look into how the HAAdmin protocol works 
in this context.

 

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to the NameNode will be synced, in the sense 
> that the NameNode will not process it until the state id catches up. But in 
> certain cases, we would like to bypass this check and allow the call to 
> return immediately, even when the server id is not up to date. One case could 
> be the to-be-added new API in HDFS-13749 that requests the current state id. 
> Others may include calls that do not promise real-time responses, such as 
> {{getContentSummary}}. This Jira is to add the mechanism to allow certain 
> calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599427#comment-16599427
 ] 

Chen Liang commented on HDFS-13880:
---

Thanks for the comments, [~xkrogen] and [~shv].
{quote}{{@ReadOnly(isMasync = true)}}. This won't help with preventing ...
{quote}
Makes sense; this can be done by setting masync to true by default. I will 
update it in the next patch.

{quote}Also, right now it is only checking the name of the method
{quote}
I'm not a big fan of checking by method name personally either, but it seems 
this is already what's being used on the server-side RPC. The method name is 
included in the RPC header, and the server-side ProtobufRpcEngine actually 
relies on the method name to find the right call. So I tend to believe that it 
should be okay to use here. Different protocols having the same method is not 
an issue either, because the RPC header also has 
{{getDeclaringClassProtocolName()}}, which gives the protocol name. With this, 
we can look up the annotations of a method in pretty much any protocol. I will 
leverage this in the next patch as well.

I would say the approach taken in the current patch is fine, but I'm totally 
open to seeing how HDFS-13872 goes.
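
As an illustration of the lookup described above, resolving a method's 
annotation from the protocol and method names carried in the RPC header might 
look like this; the @ReadOnly annotation and its field name are loose 
assumptions from the discussion, not the final patch:

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Sketch: the RPC header gives the declaring protocol name and the method
// name, which is enough to find the method's annotation reflectively.
public class SyncBypassSketch {
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface ReadOnly {
    boolean isCoordinated() default true;  // field name is an assumption
  }

  static boolean requiresSync(String protocolName, String methodName)
      throws ClassNotFoundException {
    Class<?> protocol = Class.forName(protocolName);
    for (Method m : protocol.getMethods()) {
      if (m.getName().equals(methodName)) {
        ReadOnly ann = m.getAnnotation(ReadOnly.class);
        return ann == null || ann.isCoordinated();
      }
    }
    return true;  // unknown methods stay coordinated (synced)
  }
}
{code}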

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to the NameNode will be synced, in the sense 
> that the NameNode will not process it until the state id catches up. But in 
> certain cases, we would like to bypass this check and allow the call to 
> return immediately, even when the server id is not up to date. One case could 
> be the to-be-added new API in HDFS-13749 that requests the current state id. 
> Others may include calls that do not promise real-time responses, such as 
> {{getContentSummary}}. This Jira is to add the mechanism to allow certain 
> calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599423#comment-16599423
 ] 

Siyao Meng edited comment on HDFS-13886 at 9/1/18 12:17 AM:


[~jojochuang] Yes, those two would fail because the test cases also rely on 
HDFS-13838.

Sorry that I forgot to mention this earlier. Linked HDFS-13838 as the blocking 
Jira.


was (Author: smeng):
[~jojochuang] Yes, those two would fail because the test cases also rely on 
HDFS-13838.

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.
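
The fix being described amounts to including the bit when the file status is 
serialized; a hedged sketch of the kind of change (the JSON key name and the 
surrounding fields are assumptions, not the exact patch):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;

// Sketch only: FSOperations.toJsonInner() builds a JSON map from a
// FileStatus; the fix is to carry the "snapshot enabled" bit so that
// isSnapshotEnabled() survives the HttpFS/WebHDFS round trip.
public class SnapshotBitSketch {
  static Map<String, Object> toJsonInner(FileStatus status) {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("pathSuffix", status.getPath().getName());
    json.put("length", status.getLen());
    // ... other existing fields ...
    json.put("snapshotEnabled", status.isSnapshotEnabled());  // the fix
    return json;
  }
}
{code}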



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599423#comment-16599423
 ] 

Siyao Meng edited comment on HDFS-13886 at 9/1/18 12:15 AM:


[~jojochuang] Yes, those two would fail because the test cases also rely on 
HDFS-13838.


was (Author: smeng):
[~jojochuang] Yes, those two WOULD fail because the test cases also rely on 
HDFS-13838.

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599421#comment-16599421
 ] 

genericqa commented on HDDS-392:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-392 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937982/HDDS-392.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 248d013f1c7d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 50d2e3e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/927/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/927/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incomplete description about auditMap#key in AuditLogging Framework
> 

[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599423#comment-16599423
 ] 

Siyao Meng commented on HDFS-13886:
---

[~jojochuang] Yes, those two WOULD fail because the test cases also rely on 
HDFS-13838.

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599390#comment-16599390
 ] 

Wei-Chiu Chuang commented on HDFS-13886:


Test failures seem related. Would you please check? [~smeng]

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599386#comment-16599386
 ] 

genericqa commented on HDFS-13886:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 52s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
|   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13886 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937838/HDFS-13886.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d5425287656d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 50d2e3e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24931/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24931/testReport/ |
| Max. process+thread count | 634 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24931/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Created] (HDDS-393) Audit Parser tool for processing ozone audit logs

2018-08-31 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-393:
--

 Summary: Audit Parser tool for processing ozone audit logs
 Key: HDDS-393
 URL: https://issues.apache.org/jira/browse/HDDS-393
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Jira to create an audit parser tool to process Ozone audit logs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-392:
---
Fix Version/s: 0.2.1
   Attachment: HDDS-392.001.patch
   Status: Patch Available  (was: In Progress)

[~anu] - Trivial doc fix. Request your review and help to resolve it. Thanks.
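
For context, the point of the doc fix is that audit-map keys should be 
well-defined constants rather than ad-hoc strings. A minimal sketch of that 
usage, assuming a plain {{Map<String, String>}} audit map and illustrative 
constants (the names below are assumptions, not taken from the patch):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class AuditMapExample {
  // Illustrative constants; in Ozone these would come from a shared
  // constants class rather than being free-form strings.
  private static final String VOLUME = "volume";
  private static final String BUCKET = "bucket";

  public static Map<String, String> buildAuditMap(String volume, String bucket) {
    // LinkedHashMap keeps insertion order, so audit lines stay predictable.
    Map<String, String> auditMap = new LinkedHashMap<>();
    auditMap.put(VOLUME, volume);
    auditMap.put(BUCKET, bucket);
    return auditMap;
  }
}
{code}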

> Incomplete description about auditMap#key in AuditLogging Framework
> ---
>
> Key: HDDS-392
> URL: https://issues.apache.org/jira/browse/HDDS-392
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-392.001.patch
>
>
> Trivial issue where the description about key in auditMap is incomplete and 
> can lead to developers creating invalid audit keys for logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-392 started by Dinesh Chitlangia.
--
> Incomplete description about auditMap#key in AuditLogging Framework
> ---
>
> Key: HDDS-392
> URL: https://issues.apache.org/jira/browse/HDDS-392
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
>
> Trivial issue where the description about key in auditMap is incomplete and 
> can lead to developers creating invalid audit keys for logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-392) Incomplete description about auditMap#key in AuditLogging Framework

2018-08-31 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-392:
--

 Summary: Incomplete description about auditMap#key in AuditLogging 
Framework
 Key: HDDS-392
 URL: https://issues.apache.org/jira/browse/HDDS-392
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Trivial issue where the description about key in auditMap is incomplete and can 
lead to developers creating invalid audit keys for logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-390:
---
Attachment: HDDS-390.001.patch
Status: Patch Available  (was: In Progress)

Uploading a sample patch. I expect it will break some integration tests. The 
intention of this draft patch is to reveal the ground to be covered. :)
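
For reviewers, a minimal sketch of the kind of check this patch aims at, 
assuming a regex over the RFC 2396 character set plus '/' (the method name 
follows the proposal; the exact pattern and the rejection of empty names are 
assumptions, not the patch):

{code:java}
import java.util.regex.Pattern;

public final class KeyNameValidator {
  // Alphanumerics plus the RFC 2396 mark/reserved characters, with '/'
  // explicitly permitted. The escaped set here is an assumption for
  // illustration; the TBD rules (periods, dashes) are not encoded yet.
  private static final Pattern VALID_KEY =
      Pattern.compile("^[a-zA-Z0-9\\-_.!~*'()/;?:@&=+$,%#\\[\\]]+$");

  public static void validateKeyName(String keyName) {
    if (keyName == null || keyName.isEmpty()
        || !VALID_KEY.matcher(keyName).matches()) {
      throw new IllegalArgumentException("Invalid key name: " + keyName);
    }
  }

  private KeyNameValidator() { }
}
{code}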

> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-390.001.patch
>
>
> As per design, key names composed of all valid characters in URI set must be 
> treated as valid key name.
> For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName() similar to 
> validateResourceName() that validates bucket/volume name
>  
> Valid Key name must:
>  * conform to URI Character set
>  * must allow /
> TBD whether key names must impose other rules similar to volume/bucket names 
> like  -
>  * should not start with period or dash
>  * should not end with period or dash
>  * should not have contiguous periods
>  * should not have period after dash and vice versa
> etc
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-390 started by Dinesh Chitlangia.
--
> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As per design, key names composed of all valid characters in URI set must be 
> treated as valid key name.
> For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName() similar to 
> validateResourceName() that validates bucket/volume name
>  
> Valid Key name must:
>  * conform to URI Character set
>  * must allow /
> TBD whether key names must impose other rules similar to volume/bucket names 
> like  -
>  * should not start with period or dash
>  * should not end with period or dash
>  * should not have contiguous periods
>  * should not have period after dash and vice versa
> etc
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599349#comment-16599349
 ] 

Wei-Chiu Chuang commented on HDFS-13886:


Triggered precommit build

> HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit
> --
>
> Key: HDFS-13886
> URL: https://issues.apache.org/jira/browse/HDFS-13886
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13886.001.patch, HDFS-13886.002.patch
>
>
> FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
> Therefore, "fs.getFileStatus(path).isSnapshotEnabled()" will always return 
> false for fs type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
> Additional tests in BaseTestHttpFSWith will be added to prevent this from 
> happening.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599347#comment-16599347
 ] 

Wei-Chiu Chuang commented on HDFS-13838:


triggered rebuild. +1 pending Jenkins

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.
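
The shape of the client-side fix is easy to see in isolation. A minimal 
sketch, assuming a parsed JSON map and a flags-based status builder (the names 
below are illustrative; the real change belongs in 
JsonUtilClient.toFileStatus()):

{code:java}
import java.util.EnumSet;
import java.util.Map;

public class SnapshotFlagExample {
  enum Flags { SNAPSHOT_ENABLED }

  // The missing step: read the boolean out of the JSON response and carry
  // it into the flags that the resulting file status is built from.
  static EnumSet<Flags> toFlags(Map<String, Object> json) {
    EnumSet<Flags> flags = EnumSet.noneOf(Flags.class);
    if (Boolean.TRUE.equals(json.get("snapshotEnabled"))) {
      flags.add(Flags.SNAPSHOT_ENABLED);
    }
    return flags;
  }
}
{code}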



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-391) Simplify AuditMessage structure to make audit logging easier to use

2018-08-31 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-391:
--

 Summary: Simplify AuditMessage structure to make audit logging 
easier to use
 Key: HDDS-391
 URL: https://issues.apache.org/jira/browse/HDDS-391
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


In HDDS-376 a custom AuditMessage structure was created for use in Audit 
Logging.

This Jira proposes to incorporate improvements suggested by [~ajayydv].

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13812) Update Docs on Caching - Default Refresh Value

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599329#comment-16599329
 ] 

genericqa commented on HDFS-13812:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13812 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937964/HDFS-13812-001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 979cfac94cfb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 50d2e3e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24929/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Docs on Caching - Default Refresh Value
> --
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds.  Please 
> update docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-388) Fix the name of the db profile configuration key

2018-08-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599323#comment-16599323
 ] 

Hudson commented on HDDS-388:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14857 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14857/])
HDDS-388. Fix the name of the db profile configuration key. Contributed 
(aengineer: rev 50d2e3ec41c73f9a0198d4a4e3d6f308d3030b8a)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml


> Fix the name of the db profile configuration key
> 
>
> Key: HDDS-388
> URL: https://issues.apache.org/jira/browse/HDDS-388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-388.001.patch
>
>
> HDDS-359 introduced a new configuration for db profiles, but in the end the 
> name of the configuration key in ozone-default (ozone.db.profile) is 
> different from the one used in the constant (hdds.db.profile). (It was 
> moved to HddsConfigKeys at the last minute.)
> As a result TestOzoneConfigurationFields is failing for precommit tests.
> Uploading the trivial fix.
>  
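
The mismatch is easy to see in miniature: the Java constant is the source of 
truth for the key name, and ozone-default.xml must declare the same string, 
which is exactly what TestOzoneConfigurationFields asserts. A sketch (key 
names taken from the description; the surrounding class is simplified):

{code:java}
public final class HddsConfigKeys {
  // The constant the code reads; ozone-default.xml must use this exact
  // name. An entry named "ozone.db.profile" no longer matches it.
  public static final String HDDS_DB_PROFILE = "hdds.db.profile";

  private HddsConfigKeys() { }
}
{code}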



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599315#comment-16599315
 ] 

Konstantin Shvachko commented on HDFS-13880:


Following up on that thought. I have the following three candidate terms to 
replace "Masync":
# _uncoordinated_ - meaning that the operation should not be coordinated with 
the sequence of metadata updates, outside of GSI.
# _asynchronous_ - just indicates no msync-wait
# _server-local_ - meaning that it accesses the server local state rather than 
the global metadata state

LMK if there are other options? I am in favor of _uncoordinated_.

Another thought: write operations can also be global or local. For example, 
{{transitionToActive}} and {{transitionToObserver}} are write operations, but 
uncoordinated. How do we handle them now with ORPP?

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real time responses such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.
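
A hypothetical sketch of such a mechanism, assuming an annotation that the RPC 
server consults before the state-id wait (the annotation name and the check 
are illustrative, not the attached patches):

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical marker: RPC methods carrying it skip the state-id sync wait.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BypassSync { }

class RpcSyncGate {
  // Called by the server before dispatch; only coordinated calls wait for
  // the server state id to catch up.
  static boolean requiresSync(Method rpcMethod) {
    return !rpcMethod.isAnnotationPresent(BypassSync.class);
  }
}
{code}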



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599311#comment-16599311
 ] 

Erik Krogen commented on HDFS-13791:


[~csun] thanks for looking!

For alignment across logging locations, I considered this as well but decided 
to skip it: I didn't think it was too important, and at the time I couldn't 
see a way to achieve it cleanly. I am now wondering whether we could pass the 
previous {{LogAction}} into the subsequent statement and use it to align 
them... I think it should work out pretty cleanly; I'll give it a try.

For your second point, I think there is a question of how complicated we want 
to make this class to achieve generality. I guess my inclination would be to 
only build it out where we actually have a use case and then modify as 
necessary (given it is not public, we can change it anytime). One option would 
be to store the values as a 
[SummaryStatistics|http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math4/stat/descriptive/SummaryStatistics.html]
 and export this so that the caller has full flexibility in what information to 
extract.
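
A minimal sketch of the throttling idea under discussion, assuming a helper 
that suppresses a statement unless a minimum interval has elapsed and reports 
how many records it swallowed (names are illustrative, not the attached patch):

{code:java}
class LogThrottler {
  private final long minIntervalMs;
  private long lastLogMs = Long.MIN_VALUE / 2; // effectively "never logged"
  private long suppressed = 0;

  LogThrottler(long minIntervalMs) {
    this.minIntervalMs = minIntervalMs;
  }

  /**
   * Returns the number of records suppressed since the last emitted log,
   * or -1 if this record should be suppressed as well.
   */
  synchronized long shouldLog(long nowMs) {
    if (nowMs - lastLogMs < minIntervalMs) {
      suppressed++;
      return -1;
    }
    lastLogMs = nowMs;
    long count = suppressed;
    suppressed = 0;
    return count;
  }
}
{code}

A caller would log whenever shouldLog() returns >= 0 and fold the returned 
count into the message, which is roughly the LogAction pattern referenced 
above.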

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-390:
---
Description: 
As per design, key names composed of all valid characters in URI set must be 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name

 

Valid Key name must:
 * conform to URI Character set
 * must allow /

TBD whether key names must impose other rules similar to volume/bucket names 
like  -
 * should not start with period or dash
 * should not end with period or dash
 * should not have contiguous periods
 * should not have period after dash and vice versa

etc

 

  was:
As per design, key names composed of all valid characters in URI set must be 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name

 

Valid Key name must:
 * conform to URI Character set
 * must allow /

TBD whether key names must impose other rules similar to volume/bucket names 
like  -
 * should not start with period
 * should not have contiguous periods
 * should not have period after dash and vice versa

etc

 


> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As per design, key names composed of all valid characters in URI set must be 
> treated as valid key name.
> For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName() similar to 
> validateResourceName() that validates bucket/volume name
>  
> Valid Key name must:
>  * conform to URI Character set
>  * must allow /
> TBD whether key names must impose other rules similar to volume/bucket names 
> like  -
>  * should not start with period or dash
>  * should not end with period or dash
>  * should not have contiguous periods
>  * should not have period after dash and vice versa
> etc
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-390:
---
Description: 
As per design, key names composed of all valid characters in URI set must be 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name

 

Valid Key name must:
 * conform to URI Character set
 * must allow /

TBD whether key names must impose other rules similar to volume/bucket names 
like  -
 * should not start with period
 * should not have contiguous periods
 * should not have period after dash and vice versa

etc

 

  was:
As per design, key names composed of all valid characters in URI set are 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name

 

Valid Key name must:
 * conform to URI Character set
 * must allow /
 *


> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As per design, key names composed of all valid characters in URI set must be 
> treated as valid key name.
> For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName() similar to 
> validateResourceName() that validates bucket/volume name
>  
> Valid Key name must:
>  * conform to URI Character set
>  * must allow /
> TBD whether key names must impose other rules similar to volume/bucket names 
> like  -
>  * should not start with period
>  * should not have contiguous periods
>  * should not have period after dash and vice versa
> etc
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599306#comment-16599306
 ] 

Hudson commented on HDDS-98:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14856 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14856/])
HDDS-98. Adding Ozone Manager Audit Log. Contributed by Dinesh (aengineer: rev 
630b64ec7e963968a5bdcd1d625fc78746950137)
* (add) hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java
* (edit) hadoop-dist/src/main/compose/ozone/docker-config


> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-08-31 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599303#comment-16599303
 ] 

Siyao Meng commented on HDFS-13876:
---

This patch depends on the bug fix in HDFS-13886. Will submit when HDFS-13886 is 
committed.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-390:
---
Description: 
As per design, key names composed of all valid characters in URI set are 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name

 

Valid Key name must:
 * conform to URI Character set
 * must allow /
 *

  was:
As per design, key names composed of all valid characters in URI set are 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name


> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As per design, key names composed of all valid characters in URI set are 
> treated as valid key name.
> For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName() similar to 
> validateResourceName() that validates bucket/volume name
>  
> Valid Key name must:
>  * conform to URI Character set
>  * must allow /
>  *



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13885) Improve debugging experience of dfsclient decrypts

2018-08-31 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599301#comment-16599301
 ] 

Xiao Chen commented on HDFS-13885:
--

Sorry I wasn't clear enough.
I was thinking like:
{code}
LOG.debug("...  output stream: 0x{}", Integer.toHexString(dfsos.hashCode())");
{code}
to make the magic number in the log more self explaining.

It would be good to also show in a comment what the logs look like. Running 
related unit tests with DEBUG turned on would do. (I looked at it via 
{{TestSecureEncryptionZoneWithKMS}} and it looks good, but nice to show this on 
jira for reference).

Thanks for working on this, Kitti!

> Improve debugging experience of dfsclient decrypts
> --
>
> Key: HDFS-13885
> URL: https://issues.apache.org/jira/browse/HDFS-13885
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13885.001.patch, HDFS-13885.002.patch
>
>
> We want to know from the hdfs client log (e.g. hbase RS logs), for each 
> CryptoOutputstream, approximately when the decrypt happens and when the 
> file read happens, to help us rule out or identify the hdfs NN / kms / DN 
> being slow.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599302#comment-16599302
 ] 

Konstantin Shvachko commented on HDFS-13880:


This is an important question:
??why we do the check on the server side vs. client side???
We have global metadata, which is directories, files, blocks and their 
attributes. And we have a Global Sequence Id (GSI), which reflects the 
evolution of the metadata. We also have local state for each server (NN), 
including HAState, SafeMode, and DataNode reports. The source of truth for 
whether an operation accesses the global metadata state or the local server 
state is solely on the server side.
Suppose that we changed the semantics of some state from local to global or 
vice versa. If clients enforce local/global semantics, then we will need to 
update ALL clients in the system for the change to take effect, and we cannot 
enforce it while there are still old clients out there. If this is enforced on 
the server, then clients will just follow the new rules.

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server id is not up to date. One case could be the to-be-added 
> new API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real time responses such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599298#comment-16599298
 ] 

Dinesh Chitlangia commented on HDDS-98:
---

[~anu] thank you for committing this.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13812) Update Docs on Caching - Default Refresh Value

2018-08-31 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13812:

Status: Patch Available  (was: Open)

> Update Docs on Caching - Default Refresh Value
> --
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds.  Please 
> update docs.
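
For anyone tuning this, the interval is an ordinary Configuration long in 
milliseconds; a minimal sketch (30000 mirrors the post-HDFS-6106 default 
discussed here):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CacheRefreshConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 30 seconds, the default after HDFS-6106; override as needed.
    conf.setLong("dfs.namenode.path.based.cache.refresh.interval.ms", 30000L);
    System.out.println(
        conf.getLong("dfs.namenode.path.based.cache.refresh.interval.ms", 0L));
  }
}
{code}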



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-388) Fix the name of the db profile configuration key

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-388:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elek] Thanks for the contribution. I have committed this to trunk.

> Fix the name of the db profile configuration key
> 
>
> Key: HDDS-388
> URL: https://issues.apache.org/jira/browse/HDDS-388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-388.001.patch
>
>
> HDDS-359 introduced a new configuration for db profiles, but in the end the 
> name of the configuration key in ozone-default (ozone.db.profile) is 
> different from the one used in the constant (hdds.db.profile). (It was 
> moved to HddsConfigKeys at the last minute.)
> As a result TestOzoneConfigurationFields is failing for precommit tests.
> Uploading the trivial fix.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13812) Update Docs on Caching - Default Refresh Value

2018-08-31 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13812:

Attachment: HDFS-13812-001.patch

> Update Docs on Caching - Default Refresh Value
> --
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds.  Please 
> update docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599297#comment-16599297
 ] 

Dinesh Chitlangia commented on HDDS-337:


[~anu] thanks for your inputs and closing the jira. I have logged HDDS-390 to 
create validateKeyName() implementation.

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution. Here, the keys are created 
> with wildcard special characters in their key names.
> Expectation:
> Wildcard special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:18:14 INFO 

[jira] [Created] (HDDS-390) Add method to check for valid key name based on URI characters

2018-08-31 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-390:
--

 Summary: Add method to check for valid key name based on URI 
characters
 Key: HDDS-390
 URL: https://issues.apache.org/jira/browse/HDDS-390
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


As per design, key names composed of all valid characters in URI set are 
treated as valid key name.

For URI character set: [https://tools.ietf.org/html/rfc2396#appendix-A]

This Jira proposes to define validateKeyName() similar to 
validateResourceName() that validates bucket/volume name



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13812) Update Docs on Caching - Default Refresh Value

2018-08-31 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre reassigned HDFS-13812:
---

Assignee: Hrishikesh Gadre

> Update Docs on Caching - Default Refresh Value
> --
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds.  Please 
> update docs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-08-31 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13868:
--
Component/s: hdfs

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snapshot", 
> allowSnapshot() on it, and create a snapshot named "snap3" for it to reach 
> NPE.
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap2=snap3"
> # Note that I intentionally typed the wrong parameter name for 
> "oldsnapshotname" above to cause NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs==snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-388) Fix the name of the db profile configuration key

2018-08-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599291#comment-16599291
 ] 

Anu Engineer commented on HDDS-388:
---

Thank you, I appreciate you being irritated enough to fix my stupidity. :) I 
will commit this now.

> Fix the name of the db profile configuration key
> 
>
> Key: HDDS-388
> URL: https://issues.apache.org/jira/browse/HDDS-388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-388.001.patch
>
>
> HDDS-359 introduced a new configuration for db profiles, but in the end the 
> name of the configuration key in ozone-default (ozone.db.profile) is 
> different from the one used in the constant (hdds.db.profile). (It was 
> moved to HddsConfigKeys at the last minute.)
> As a result TestOzoneConfigurationFields is failing for precommit tests.
> Uploading the trivial fix.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-98) Adding Ozone Manager Audit Log

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-98:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao],[~jnp] Thanks for review. [~dineshchitlangia] Thank you for the 
contribution. I have committed this to the trunk.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add ozone manager's audit log. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-337.
---
Resolution: Information Provided

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution. Here, the keys are created 
> with wildcard special characters in their key names.
> Expectation:
> Wildcard special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 

[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599287#comment-16599287
 ] 

genericqa commented on HDFS-13880:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
21s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 22s{color} | {color:orange} root: The patch generated 1 new + 221 unchanged 
- 0 fixed = 222 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   

[jira] [Commented] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599257#comment-16599257
 ] 

genericqa commented on HDDS-387:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 28 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  1s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdds.scm.container.TestContainerStateManager |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | 

[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-08-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599255#comment-16599255
 ] 

Anu Engineer commented on HDDS-358:
---

Patch v1 depends on HDDS-357.

cc: [~jnp], [~ljain], [~xyao], [~nandakumar131], [~elek]

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-358:
--
Attachment: HDDS-358.001.patch

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-08-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-358:
--
Summary: Use DBStore and TableStore for DeleteKeyService  (was: Use DBStore 
and TableStore for OzoneManager background services)

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-08-31 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599222#comment-16599222
 ] 

Hrishikesh Gadre commented on HDFS-13820:
-

{quote}Alternatively, provide a configuration flag to turn this feature off 
altogether.
{quote}
Note that such a configuration option was available earlier but was removed as 
part of HDFS-5651. It sounds like a good idea to bring it back.
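
If the flag comes back, the gating could look roughly like the minimal sketch 
below. The key name (assumed here to be the old dfs.namenode.caching.enabled 
that HDFS-5651 removed) and all class and method names are illustrative, not 
the actual NameNode wiring:

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch: start/stop the monitor as cached paths appear and disappear. */
class CacheMonitorGate {
  // Assumed key; HDFS-5651 removed the original dfs.namenode.caching.enabled.
  static final String CACHING_ENABLED_KEY = "dfs.namenode.caching.enabled";

  private final boolean enabled;
  private int cachedPathCount = 0;
  private boolean monitorRunning = false;

  CacheMonitorGate(Configuration conf) {
    this.enabled = conf.getBoolean(CACHING_ENABLED_KEY, true);
  }

  synchronized void onDirectiveAdded() {
    cachedPathCount++;
    if (enabled && !monitorRunning) {
      monitorRunning = true;   // where CacheReplicationMonitor would start
    }
  }

  synchronized void onDirectiveRemoved() {
    cachedPathCount--;
    if (cachedPathCount == 0 && monitorRunning) {
      monitorRunning = false;  // where CacheReplicationMonitor would shut down
    }
  }
}
{code}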

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> Starting with [HDFS-6106] the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-31 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599217#comment-16599217
 ] 

Hanisha Koneru commented on HDDS-387:
-

Hi [~msingh],

If I understand correctly, you are suggesting we create a MiniO3FSCluster in 
the ozonefs package, and MiniOzoneCluster would call this class for FileSystem 
operations?

This is a good idea. It removes the dependency on integration tests from 
ozonefs and also keeps the filesystem tests in the ozonefs package itself.

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch, HDDS-387.002.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This also causes issues when developing unit tests that try to instantiate an 
> OzoneFileSystem object inside hadoop-ozone-integration-test, as that creates 
> a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13780) Postpone NameNode state discovery in ObserverReadProxyProvider until the first real RPC call.

2018-08-31 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-13780.

   Resolution: Duplicate
Fix Version/s: HDFS-12943

I think it was incorporated, indeed.

> Postpone NameNode state discovery in ObserverReadProxyProvider until the 
> first real RPC call.
> -
>
> Key: HDFS-13780
> URL: https://issues.apache.org/jira/browse/HDFS-13780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Fix For: HDFS-12943
>
>
> Currently, during instantiation, {{ObserverReadProxyProvider}} discovers 
> Observers by poking known NameNodes and checking their states. This rather 
> expensive process can be postponed until the first actual RPC call.
> This is an optimization.
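
The postponement described above is essentially standard lazy initialization. 
Here is a minimal sketch of the idea, with all names assumed rather than taken 
from the ObserverReadProxyProvider internals:

{code:java}
import java.util.List;
import java.util.function.Supplier;

/** Sketch: defer an expensive discovery step until the first real call. */
class LazyDiscovery<T> {
  private final Supplier<List<T>> discover;  // e.g. poke NameNodes for state
  private volatile List<T> observers;

  LazyDiscovery(Supplier<List<T>> discover) {
    this.discover = discover;
  }

  /** Runs discovery at most once, on the first RPC that needs it. */
  List<T> get() {
    List<T> local = observers;
    if (local == null) {
      synchronized (this) {
        local = observers;
        if (local == null) {
          observers = local = discover.get();
        }
      }
    }
    return local;
  }
}
{code}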



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599208#comment-16599208
 ] 

Dinesh Chitlangia commented on HDDS-337:


[~anu] Alright, then I presume we can close this Jira as 'Invalid' or 'Working 
as designed'.

I will log a new Jira to add methods for verification of key names as per URI.
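
As a rough illustration of what such a verification method could look like, 
here is a minimal sketch that accepts only the unreserved and reserved URI 
characters from RFC 2396. The class and method names, and the exact character 
set, are assumptions for illustration, not a committed API:

{code:java}
import java.util.regex.Pattern;

public final class KeyNameValidator {

  // Unreserved + reserved URI characters per RFC 2396 (assumed rule set).
  private static final Pattern VALID_KEY_NAME =
      Pattern.compile("[a-zA-Z0-9\\-_.!~*'();/?:@&=+$,]+");

  private KeyNameValidator() {
  }

  /** Returns true if the key name uses only URI-legal characters. */
  public static boolean isValidKeyName(String keyName) {
    return keyName != null && !keyName.isEmpty()
        && VALID_KEY_NAME.matcher(keyName).matches();
  }
}
{code}

Note that under this rule the names d++ and d** from the log below would still 
pass, which is consistent with treating all valid URI characters as legal.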

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> Wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - 

[jira] [Updated] (HDDS-388) Fix the name of the db profile configuration key

2018-08-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-388:
---
Fix Version/s: 0.2.1

> Fix the name of the db profile configuration key
> 
>
> Key: HDDS-388
> URL: https://issues.apache.org/jira/browse/HDDS-388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 0.2.1
>
> Attachments: HDDS-388.001.patch
>
>
> HDDS-359 introduced a new configuration for db profiles, but in the end the 
> name of the configuration key in ozone-default (ozone.db.profile) is 
> different from the one used in the constant (hdds.db.profile). (It was moved 
> to HddsConfigKeys at the last minute.)
> As a result, TestOzoneConfigurationFields is failing for precommit tests.
> Uploading the trivial fix.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands

2018-08-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599195#comment-16599195
 ] 

Íñigo Goiri commented on HDFS-13867:


The tests for [^HDFS-13867-05.patch] passed correctly here:
https://builds.apache.org/job/PreCommit-HDFS-Build/24927/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterAdminCLI/
+1

> RBF: Add validation for max arguments for Router admin ls, clrQuota, 
> setQuota, rm and nameservice commands
> --
>
> Key: HDFS-13867
> URL: https://issues.apache.org/jira/browse/HDFS-13867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, 
> HDFS-13867-03.patch, HDFS-13867-04.patch, HDFS-13867-05.patch
>
>
> Add validation to check that the total number of arguments provided for the 
> Router admin commands is not more than the maximum possible. In most cases, 
> if there are unrelated extra parameters after the required arguments, the 
> command does not validate against this but instead performs the action with 
> the required parameters and silently ignores the extra ones, which it should 
> not do in the ideal case.
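
The validation the description asks for amounts to a simple guard on the 
parsed argument count. A minimal sketch, with the command name and limit 
chosen purely for illustration rather than taken from the patch:

{code:java}
/** Sketch of a max-argument guard for an admin CLI command. */
public class AdminArgCheck {

  static boolean validateMax(String cmd, String[] argv, int maxArgs) {
    if (argv.length > maxArgs) {
      System.err.println("Too many arguments for " + cmd
          + ": expected at most " + maxArgs + ", got " + argv.length);
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    // "-rm" takes one mount point here, so the trailing "extra" is rejected.
    System.out.println(validateMax("-rm", new String[] {"/src", "extra"}, 1));
  }
}
{code}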



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-31 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-387:
--
Status: Open  (was: Patch Available)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch, HDDS-387.002.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This also causes issues when developing unit tests that try to instantiate an 
> OzoneFileSystem object inside hadoop-ozone-integration-test, as that creates 
> a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-31 Thread Namit Maheshwari (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Maheshwari updated HDDS-387:
--
Attachment: HDDS-387.002.patch
Status: Patch Available  (was: Open)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-387.001.patch, HDDS-387.002.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.
> Ideally, filesystem modules should not depend on test modules.
> This also causes issues when developing unit tests that try to instantiate an 
> OzoneFileSystem object inside hadoop-ozone-integration-test, as that creates 
> a circular dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599145#comment-16599145
 ] 

Anu Engineer commented on HDDS-337:
---

[~dineshchitlangia] when Ozone was being designed we looked at other cloud 
storage systems like S3. Here is the guideline from S3 on the key name 
character set:

[https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html]

Bottom line: Ozone is not doing anything radically different from S3, so my 
thought is to stick to the current set (that is, valid URI keys) as is. I am 
sure OzoneFS might struggle with this, but I am hoping that OzoneFS will never 
create these invalid names itself.

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> Wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN 

[jira] [Commented] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599150#comment-16599150
 ] 

Erik Krogen commented on HDFS-13265:


Attaching a branch-2 patch to see if this will actually succeed in Jenkins 
despite other branch-2 builds failing.

The trunk patch is still pending fixes for all of the test failures... It seems 
there are not too many, but they need to be fixed before this can be committed.
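
For context, a test that needs fewer resources can already pin lower values 
itself. Here is a minimal sketch using the existing builder; the particular 
keys and values are illustrative, not the defaults this patch sets:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class LowResourceMiniCluster {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Illustrative low values; the stock NN handler count default is 10.
    conf.setInt(DFSConfigKeys.DFS_NAMENODE_HANDLER_COUNT_KEY, 2);
    conf.setInt(DFSConfigKeys.DFS_DATANODE_HANDLER_COUNT_KEY, 2);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}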

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265-branch-2.001.patch, 
> HDFS-13265.000.patch, HDFS-13265.001.patch, HDFS-13265.002.patch, 
> TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}}, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13265) MiniDFSCluster should set reasonable defaults to reduce resource consumption

2018-08-31 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13265:
---
Attachment: HDFS-13265-branch-2.001.patch

> MiniDFSCluster should set reasonable defaults to reduce resource consumption
> 
>
> Key: HDFS-13265
> URL: https://issues.apache.org/jira/browse/HDFS-13265
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13265-branch-2.000.patch, 
> HDFS-13265-branch-2.000.patch, HDFS-13265-branch-2.001.patch, 
> HDFS-13265.000.patch, HDFS-13265.001.patch, HDFS-13265.002.patch, 
> TestMiniDFSClusterThreads.java
>
>
> MiniDFSCluster takes its defaults from {{DFSConfigKeys}}, but many 
> of these are not suitable for a unit test environment. For example, the 
> default handler thread count of 10 is definitely more than necessary for 
> (almost?) any unit test. We should set reasonable, lower defaults unless a 
> test specifically requires more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands

2018-08-31 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599141#comment-16599141
 ] 

Ayush Saxena commented on HDFS-13867:
-

Thanx [~elgoiri] for the Review!! :-) 

> RBF: Add validation for max arguments for Router admin ls, clrQuota, 
> setQuota, rm and nameservice commands
> --
>
> Key: HDFS-13867
> URL: https://issues.apache.org/jira/browse/HDFS-13867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, 
> HDFS-13867-03.patch, HDFS-13867-04.patch, HDFS-13867-05.patch
>
>
> Add validation to check that the total number of arguments provided for the 
> Router admin commands is not more than the maximum possible. In most cases, 
> if there are unrelated extra parameters after the required arguments, the 
> command does not validate against this but instead performs the action with 
> the required parameters and silently ignores the extra ones, which it should 
> not do in the ideal case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599134#comment-16599134
 ] 

Dinesh Chitlangia commented on HDDS-337:


[~anu] - Following our discussion that valid key names must use only characters 
that are acceptable in a valid URI, it appears that key names can have 
wildcard characters like *, ?, and +.

What are your thoughts/comments on this?

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> Wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, 

[jira] [Commented] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599130#comment-16599130
 ] 

genericqa commented on HDFS-13867:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13867 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937937/HDFS-13867-05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d458c596a046 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8aa6c4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24927/testReport/ |
| Max. process+thread count | 1337 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24927/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add validation for max arguments for Router admin ls, clrQuota, 
> setQuota, rm and nameservice commands
> 

[jira] [Commented] (HDDS-337) keys created with key name having special character/wildcard should not allowed

2018-08-31 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599116#comment-16599116
 ] 

Dinesh Chitlangia commented on HDDS-337:


[~nilotpalnandi] - Thanks!

> keys created with key name having special character/wildcard should not 
> allowed
> ---
>
> Key: HDDS-337
> URL: https://issues.apache.org/jira/browse/HDDS-337
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> Please find the snippet of command execution below. Here, the keys are 
> created with wildcard/special characters in their key names.
> Expectation:
> Wildcard/special characters should not be allowed.
>  
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d++ 
> -file /etc/services -v
> 2018-08-08 13:17:48 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d++
> File Hash : 567c100888518c1163b3462993de7d47
> Key Name : d++ does not exist, creating it
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 13:17:48 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:17:49 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 1:17:49 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_1.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/d** 
> -file /etc/passwd -v
> 2018-08-08 13:18:13 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : d**
> File Hash : b056233571cc80d6879212911cb8e500
> Key Name : d** does not exist, creating it
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 13:18:14 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 13:18:14 INFO 

[jira] [Commented] (HDFS-5376) Incremental rescanning of cached blocks and cache entries

2018-08-31 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599111#comment-16599111
 ] 

Hrishikesh Gadre commented on HDFS-5376:


[~andrew.wang] are you planning to work on this? Since I am looking into 
HDFS-13820, I would like to work on this one as well, as the two are related.

> Incremental rescanning of cached blocks and cache entries
> -
>
> Key: HDFS-5376
> URL: https://issues.apache.org/jira/browse/HDFS-5376
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: namenode
>Affects Versions: HDFS-4949
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Major
>
> {{CacheReplicationMonitor#rescan}} is invoked whenever a new cache entry is 
> added or removed. This involves a complete rescan of all cache entries and 
> cached blocks, which is potentially expensive. It'd be better to do an 
> incremental scan instead. This would also let us incrementally re-scan on 
> namespace changes like rename and create for better caching latency.
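
One way to picture the incremental variant is a dirty-set scan: only entries 
touched since the last pass are re-evaluated. A minimal sketch of the idea, 
with all names assumed rather than taken from CacheReplicationMonitor:

{code:java}
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

/** Sketch: rescan only the entries dirtied since the last pass. */
class IncrementalRescanner<E> {
  private final Set<E> dirty = new HashSet<>();

  /** Called on add/remove of a cache entry, or on rename/create. */
  synchronized void markDirty(E entry) {
    dirty.add(entry);
  }

  /** Processes the accumulated dirty set instead of every entry. */
  synchronized void rescan() {
    Queue<E> work = new ArrayDeque<>(dirty);
    dirty.clear();
    for (E entry : work) {
      process(entry);  // re-evaluate replication state for this entry only
    }
  }

  private void process(E entry) {
    // placeholder for per-entry cache replication logic
  }
}
{code}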



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-08-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599109#comment-16599109
 ] 

Ajay Kumar commented on HDDS-351:
-

The failure in TestOzoneConfigurationFields is unrelated. TestOzoneRestClient 
timed out in the Jenkins run but passes locally.

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13780) Postpone NameNode state discovery in ObserverReadProxyProvider until the first real RPC call.

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599105#comment-16599105
 ] 

Erik Krogen commented on HDFS-13780:


Hey [~vagarychen], this was superseded by HDFS-13779, right?

> Postpone NameNode state discovery in ObserverReadProxyProvider until the 
> first real RPC call.
> -
>
> Key: HDFS-13780
> URL: https://issues.apache.org/jira/browse/HDFS-13780
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
>
> Currently, during instantiation, {{ObserverReadProxyProvider}} discovers 
> Observers by poking known NameNodes and checking their states. This rather 
> expensive process can be postponed until the first actual RPC call.
> This is an optimization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-08-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599096#comment-16599096
 ] 

Íñigo Goiri commented on HDFS-13852:


The report looks clean and the unit test seems to pass.
A few comments:
* We should use the time-duration API accordingly; use a default of 10s and 
DN_REPORT_CACHE_EXPIRE_MS_DEFAULT = TimeUnit.SECONDS.toMillis(10); see the 
sketch below.
* Not sure if we can do a proper unit test for this that checks we time out 
after that interval. It might be overkill.
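
A minimal sketch of that suggestion, using Configuration#getTimeDuration so 
the value can be written as "10s" in the XML; the key and constant names 
mirror the comment above and are otherwise assumptions:

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class DnReportConfSketch {
  // Assumed key/constant names, mirroring the suggestion above.
  static final String DN_REPORT_CACHE_EXPIRE =
      "dfs.federation.router.dn-report.cache-expire";
  static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT =
      TimeUnit.SECONDS.toMillis(10);

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // getTimeDuration lets users write "10s", "1m", etc. in the config.
    long expireMs = conf.getTimeDuration(DN_REPORT_CACHE_EXPIRE,
        DN_REPORT_CACHE_EXPIRE_MS_DEFAULT, TimeUnit.MILLISECONDS);
    System.out.println("dn-report cache expiry: " + expireMs + " ms");
  }
}
{code}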

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852.001.patch, HDFS-13852.002.patch, 
> HDFS-13852.003.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' 
> periodically, and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timing out. But when we 
> start the router, FederationMetrics will also invoke the method to get 
> node usage. If a timeout error happens, we cannot adjust the time-out 
> parameter. And the time-out in FederationMetrics and NamenodeBeanMetrics 
> should be the same.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599089#comment-16599089
 ] 

genericqa commented on HDFS-13852:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
27s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937867/HDFS-13852.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 48f2c23b2499 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8aa6c4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24924/testReport/ |
| Max. process+thread count | 943 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24924/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-13774) EC: "hdfs ec -getPolicy" is not retrieving policy details when the special REPLICATION policy set on the directory

2018-08-31 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599083#comment-16599083
 ] 

Ayush Saxena commented on HDFS-13774:
-

Thanks [~xiaochen] for the review!

I have updated the patch. :)

> EC: "hdfs ec -getPolicy" is not retrieving policy details when the special 
> REPLICATION policy set on the directory
> --
>
> Key: HDFS-13774
> URL: https://issues.apache.org/jira/browse/HDFS-13774
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: GetPolicy_EC.png, HDFS-13774-01.patch, 
> HDFS-13774-02.patch
>
>
>  Erasure coding: "hdfs ec -getPolicy" is not retrieving policy details when 
> the special REPLICATION policy is set on the directory
> Steps :-
>  - Create a directory "testEC"
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
> - Enable any Erasure coding policy like "XOR-2-1-1024k"
> - Set the EC Policy on the Directory
> - Get the EC policy for the directory [Received message as : "XOR-2-1-1024k" ]
> - Now again set the EC Policy on the directory as "replicate" special 
> REPLICATION policy
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
>  The policy is being set for the directory, but while retrieving the policy 
> details it reports the policy for the directory as unspecified, which is 
> wrong behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13774) EC: "hdfs ec -getPolicy" is not retrieving policy details when the special REPLICATION policy set on the directory

2018-08-31 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599080#comment-16599080
 ] 

genericqa commented on HDFS-13774:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937935/HDFS-13774-02.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 4c3d96842857 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8aa6c4f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24925/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EC: "hdfs ec -getPolicy" is not retrieving policy details when the special 
> REPLICATION policy set on the directory
> --
>
> Key: HDFS-13774
> URL: https://issues.apache.org/jira/browse/HDFS-13774
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: GetPolicy_EC.png, HDFS-13774-01.patch, 
> HDFS-13774-02.patch
>
>
>  Erasure coding: "hdfs ec -getPolicy" is not retrieving policy details when 
> the special REPLICATION policy is set on the directory
> Steps :-
>  - Create a directory "testEC"
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
> - Enable any Erasure coding policy like "XOR-2-1-1024k"
> - Set the EC Policy on the Directory
> - Get the EC policy for the directory [Received message as : "XOR-2-1-1024k" ]
> - Now again set the EC Policy on the directory as "replicate" special 
> REPLICATION policy
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
>  The policy is being set for the directory, but while retrieving the policy 
> details it reports the policy for the directory as unspecified, which is 
> wrong behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599074#comment-16599074
 ] 

Erik Krogen commented on HDFS-13880:


So if I understand correctly, this patch _only_ skips msync on methods which 
are annotated with {{@ReadOnly(isMasync = true)}}. This won't help with 
preventing methods such as {{getServiceState()}} from blocking; the solution 
needs to be more general. At minimum, it should assume {{Masync}} by default if 
no {{@ReadOnly}} annotation is present.

Also, right now it is only checking the name of the method. This means that, 
for example, if {{ClientProtocol#getState()}} and 
{{HAServiceProtocol#getState()}} both exist, they will be treated the same. I 
know this situation probably won't arise, but it seems wrong to compare method 
equality based solely on the name of the method.

I also want to discuss why we do the check on the server side vs. the client 
side. It seems natural for the client to make the decision about whether or not 
it wants a call to be sync'd, in the same way that it decides whether or not a 
call can be serviced by the observer. In particular, the client-side 
implementation in HDFS-13872 is quite a bit cleaner than this current patch, 
IMO.
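
To make the suggested default concrete, a minimal sketch (the {{isMasync}} 
element follows the patch discussion and is an assumption, not a final API):

{code:java}
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnly {
  boolean isMasync() default false;
}

final class SyncCheck {
  /** Wait for the state id only for methods explicitly marked as coordinated
   *  reads; methods without the annotation default to no wait ("masync"). */
  static boolean requiresSync(Method m) {
    ReadOnly ann = m.getAnnotation(ReadOnly.class);
    return ann != null && !ann.isMasync();
  }
}
{code}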

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to the NameNode will be synced, in the sense 
> that the NameNode will not process it until its state id catches up. But in 
> certain cases, we would like to bypass this check and allow the call to 
> return immediately, even when the server's state id is not up to date. One 
> case could be the to-be-added new API in HDFS-13749 that requests the current 
> state id. Others may include calls that do not promise real-time responses, 
> such as {{getContentSummary}}. This Jira is to add the mechanism to allow 
> certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599073#comment-16599073
 ] 

Ajay Kumar commented on HDDS-369:
-

The test failure seems related.

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers on that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start the replication.
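
A schematic of the handling described above (all names are illustrative, not 
the committed HDDS-369 code, which works against SCM's containerStateMap):

{code:java}
import java.util.Map;
import java.util.Set;

// Schematic only: drop a dead node's replicas so its containers become
// visibly under-replicated and replication can start.
final class DeadNodeHandlerSketch {
  /** containerId -> datanode UUIDs currently holding a replica. */
  private final Map<Long, Set<String>> containerReplicas;

  DeadNodeHandlerSketch(Map<Long, Set<String>> containerReplicas) {
    this.containerReplicas = containerReplicas;
  }

  void onDeadNode(String datanodeUuid, Set<Long> containersOnNode) {
    for (Long containerId : containersOnNode) {
      Set<String> replicas = containerReplicas.get(containerId);
      if (replicas != null) {
        replicas.remove(datanodeUuid);
      }
    }
  }
}
{code}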



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands

2018-08-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13867:

Attachment: HDFS-13867-05.patch

> RBF: Add validation for max arguments for Router admin ls, clrQuota, 
> setQuota, rm and nameservice commands
> --
>
> Key: HDFS-13867
> URL: https://issues.apache.org/jira/browse/HDFS-13867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, 
> HDFS-13867-03.patch, HDFS-13867-04.patch, HDFS-13867-05.patch
>
>
> Add validation to check that the total number of arguments provided for the 
> Router admin commands is not more than the maximum possible. In most cases, 
> if there are unrelated extra parameters after the required arguments, the 
> command doesn't validate against this but instead performs the action with 
> the required parameters and ignores the extra ones, which it shouldn't in the 
> ideal case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599057#comment-16599057
 ] 

Erik Krogen commented on HDFS-13872:


Got it... Will move the discussion to HDFS-13880. Thanks for the heads up, 
[~vagarychen]!

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13867) RBF: Add validation for max arguments for Router admin ls, clrQuota, setQuota, rm and nameservice commands

2018-08-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13867:

Attachment: (was: HDFS-13867-05.patch)

> RBF: Add validation for max arguments for Router admin ls, clrQuota, 
> setQuota, rm and nameservice commands
> --
>
> Key: HDFS-13867
> URL: https://issues.apache.org/jira/browse/HDFS-13867
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13867-01.patch, HDFS-13867-02.patch, 
> HDFS-13867-03.patch, HDFS-13867-04.patch, HDFS-13867-05.patch
>
>
> Add validation to check that the total number of arguments provided for the 
> Router admin commands is not more than the maximum possible. In most cases, 
> if there are unrelated extra parameters after the required arguments, the 
> command doesn't validate against this but instead performs the action with 
> the required parameters and ignores the extra ones, which it shouldn't in the 
> ideal case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13872) Only some protocol methods should perform msync wait

2018-08-31 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599035#comment-16599035
 ] 

Chen Liang commented on HDFS-13872:
---

Somehow I missed this Jira completely... so I filed HDFS-13880 and submitted a 
patch there too. Sorry, my bad!

I was taking a very similar approach at the beginning: I added an element to 
the ReadOnly annotation to indicate whether a method should go through msync. 
But then I ran into an issue, which was that the ReadOnly annotation is only 
applied to ClientProtocol. By the time a call reaches the {{ProtobufRpcEngine}} 
layer, the protocol actually changes from {{ClientProtocol}} to 
{{ClientNamenodeProtocol}}, and the annotation can no longer be found. And the 
{{ClientNamenodeProtocol}} class is a protobuf-generated class, so we cannot 
annotate it there... Also, having chatted with Konstantin, it seems the more 
desirable approach is to do the check on the server side.

So the approach I take in HDFS-13880 is that, on the server side, when 
receiving an RPC call, the server looks up the method name from the RPC call 
in ClientProtocol; if a method with the same name exists, the annotation of 
that method in ClientProtocol is used to check whether msync should be 
bypassed.

Again, sorry I missed this Jira earlier...
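
A rough sketch of that server-side lookup (the stub interface and annotation 
stand in for {{ClientProtocol}} and the patch's {{ReadOnly}}; matching by name 
alone is exactly the point questioned in the review above):

{code:java}
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnly { boolean isMasync() default false; }

final class BypassLookup {
  // Stand-in for org.apache.hadoop.hdfs.protocol.ClientProtocol; the method
  // names here are illustrative only.
  interface ClientProtocolStub {
    @ReadOnly(isMasync = true) long getCurrentStateId();
    @ReadOnly String[] getListing(String src);
  }

  /** Look the RPC method name up in ClientProtocol; if a method with the
   *  same name exists, use its annotation to decide whether to bypass. */
  static boolean shouldBypassMsync(String rpcMethodName) {
    for (Method m : ClientProtocolStub.class.getMethods()) {
      if (m.getName().equals(rpcMethodName)) {
        ReadOnly ann = m.getAnnotation(ReadOnly.class);
        return ann != null && ann.isMasync();
      }
    }
    return false; // not found in ClientProtocol: do not bypass
  }
}
{code}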

> Only some protocol methods should perform msync wait
> 
>
> Key: HDFS-13872
> URL: https://issues.apache.org/jira/browse/HDFS-13872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13872-HDFS-12943.000.patch
>
>
> Currently the implementation of msync added in HDFS-13767 waits until the 
> server has caught up to the client-specified transaction ID regardless of 
> what the inbound RPC is. This particularly causes problems for 
> ObserverReadProxyProvider (see HDFS-13779) when we try to fetch the state 
> from an observer/standby; this should be a quick operation, but it has to 
> wait for the node to catch up to the most current state. I initially thought 
> all {{HAServiceProtocol}} methods should thus be excluded from the wait 
> period, but actually I think the right approach is that _only_ 
> {{ClientProtocol}} methods should be subjected to the wait period. I propose 
> that we can do this via an annotation on client protocol which can then be 
> checked within {{ipc.Server}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13880:
--
Status: Patch Available  (was: Open)

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to the NameNode will be synced, in the sense 
> that the NameNode will not process it until its state id catches up. But in 
> certain cases, we would like to bypass this check and allow the call to 
> return immediately, even when the server's state id is not up to date. One 
> case could be the to-be-added new API in HDFS-13749 that requests the current 
> state id. Others may include calls that do not promise real-time responses, 
> such as {{getContentSummary}}. This Jira is to add the mechanism to allow 
> certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-08-31 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599019#comment-16599019
 ] 

Chen Liang commented on HDFS-13880:
---

[~shv] Masync is just the current name I picked for methods that do not need to 
go through the sync process (which is msync); I just replaced "sync" with 
"async". Please feel free to propose a different term :).

[~csun] thanks for the clarification. I will double check whether 
{{HAServiceProtocol}} is currently synced by msync.

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch
>
>
> Currently, every single call to the NameNode will be synced, in the sense 
> that the NameNode will not process it until its state id catches up. But in 
> certain cases, we would like to bypass this check and allow the call to 
> return immediately, even when the server's state id is not up to date. One 
> case could be the to-be-added new API in HDFS-13749 that requests the current 
> state id. Others may include calls that do not promise real-time responses, 
> such as {{getContentSummary}}. This Jira is to add the mechanism to allow 
> certain calls to bypass sync.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13774) EC: "hdfs ec -getPolicy" is not retrieving policy details when the special REPLICATION policy set on the directory

2018-08-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13774:

Attachment: HDFS-13774-02.patch

> EC: "hdfs ec -getPolicy" is not retrieving policy details when the special 
> REPLICATION policy set on the directory
> --
>
> Key: HDFS-13774
> URL: https://issues.apache.org/jira/browse/HDFS-13774
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: GetPolicy_EC.png, HDFS-13774-01.patch, 
> HDFS-13774-02.patch
>
>
>  Erasure coding: "hdfs ec -getPolicy" is not retrieving policy details when 
> the special REPLICATION policy is set on the directory
> Steps :-
>  - Create a directory "testEC"
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
> - Enable any Erasure coding policy like "XOR-2-1-1024k"
> - Set the EC Policy on the Directory
> - Get the EC policy for the directory [Received message as : "XOR-2-1-1024k" ]
> - Now again set the EC Policy on the directory as "replicate" special 
> REPLICATION policy
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
>  The policy is being set for the directory, but while retrieving the policy 
> details it reports the policy for the directory as unspecified, which is 
> wrong behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13774) EC: "hdfs ec -getPolicy" is not retrieving policy details when the special REPLICATION policy set on the directory

2018-08-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13774:

Attachment: (was: HDFS-13774-02.patch)

> EC: "hdfs ec -getPolicy" is not retrieving policy details when the special 
> REPLICATION policy set on the directory
> --
>
> Key: HDFS-13774
> URL: https://issues.apache.org/jira/browse/HDFS-13774
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node Linux Cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: GetPolicy_EC.png, HDFS-13774-01.patch
>
>
>  Erasure coding: "hdfs ec -getPolicy" is not retrieving policy details when 
> the special REPLICATION policy is set on the directory
> Steps :-
>  - Create a directory "testEC"
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
> - Enable any Erasure coding policy like "XOR-2-1-1024k"
> - Set the EC Policy on the Directory
> - Get the EC policy for the directory [Received message as : "XOR-2-1-1024k" ]
> - Now again set the EC Policy on the directory as "replicate" special 
> REPLICATION policy
> - Get the EC policy for the directory [Received message as : "The erasure 
> coding policy of /testEC is unspecified" ]
>  The policy is being set for the directory, but while retrieving the policy 
> details it reports the policy for the directory as unspecified, which is 
> wrong behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13857) RBF: Choose to enable the default nameservice to write files.

2018-08-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16599010#comment-16599010
 ] 

Íñigo Goiri commented on HDFS-13857:


[^HDFS-13857.004.patch] LGTM.
Let's see if Yetus can take a run to double check the unit test.

> RBF: Choose to enable the default nameservice to write files.
> -
>
> Key: HDFS-13857
> URL: https://issues.apache.org/jira/browse/HDFS-13857
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13857.001.patch, HDFS-13857.002.patch, 
> HDFS-13857.003.patch, HDFS-13857.004.patch
>
>
> The default nameservice can provide some default properties for the namenode 
> protocol, and if we cannot find the path, we will get a location in the 
> default nameservice. From my side as a cluster administrator, we need all 
> files to be written to the location from the MountTableEntry. If there is no 
> corresponding location, an error should be returned; it is not good for 
> files to end up in some unknown location. We should provide a specific 
> parameter to enable or disable use of the default nameservice for storing 
> files.
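
A sketch of how such a switch could be consumed (the key name is an assumption 
based on this discussion, not a confirmed configuration property):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class DefaultNameserviceSwitchSketch {
  // Assumed key name, for illustration only.
  static final String ENABLE_DEFAULT_NS =
      "dfs.federation.router.default-nameservice.enable";

  /** Fail the resolution instead of silently using the default nameservice. */
  static void checkResolvable(Configuration conf, String path,
      boolean foundInMountTable) {
    boolean useDefaultNs = conf.getBoolean(ENABLE_DEFAULT_NS, true);
    if (!foundInMountTable && !useDefaultNs) {
      throw new IllegalArgumentException("Cannot find locations for " + path
          + " and the default nameservice is disabled");
    }
  }
}
{code}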



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-08-31 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13852:
---
Status: Patch Available  (was: Open)

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.1, 2.9.1, 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852.001.patch, HDFS-13852.002.patch, 
> HDFS-13852.003.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' 
> periodically, and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timing out. But when we 
> start the router, FederationMetrics will also invoke the method to get 
> node usage. If a timeout error happens, we cannot adjust the time-out 
> parameter. And the time-out in FederationMetrics and NamenodeBeanMetrics 
> should be the same.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598956#comment-16598956
 ] 

Erik Krogen commented on HDFS-13749:


FYI [~csun], I just committed HDFS-13779. You should be good to go to check out 
the HDFS-12943 branch and pick up all of the most recent changes :)

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.
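
For reference, a minimal sketch of such a probe against the existing 
{{HAServiceProtocol}} API (the OBSERVER state assumes the HDFS-12943 branch; 
proxy creation is elided):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceStatus;

final class ObserverProbe {
  /** True if the given NameNode currently reports itself as an observer. */
  static boolean isObserver(HAServiceProtocol namenode) throws IOException {
    HAServiceStatus status = namenode.getServiceStatus();
    return status.getState() == HAServiceProtocol.HAServiceState.OBSERVER;
  }
}
{code}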



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16598955#comment-16598955
 ] 

Erik Krogen commented on HDFS-13779:


Good catches, Konstantin! I should have taken a closer look after my v001 -> 
v002 revisions. I also removed the {{throws IOException}} clause from the 
{{ObserverReadProxyProvider}} constructors, as it is no longer applicable. 
Attaching v003 patch, which I also just committed to HDFS-12943 branch. Thanks 
for the help [~shv] and [~vagarychen]!

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.003.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.
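
As a simplified illustration of the desired behavior (names are hypothetical; 
the real logic lives in {{ObserverReadProxyProvider}}):

{code:java}
import java.util.List;

// Advance round-robin over the observer subset only, so a failover never
// lands on a standby (reads disallowed) or the active (defeats the purpose).
final class ObserverFailoverSketch<T> {
  private final List<T> observerProxies;
  private int currentIndex = 0;

  ObserverFailoverSketch(List<T> observerProxies) {
    this.observerProxies = observerProxies;
  }

  /** Switch to the next known observer instead of an arbitrary NameNode. */
  synchronized T performFailover() {
    currentIndex = (currentIndex + 1) % observerProxies.size();
    return observerProxies.get(currentIndex);
  }
}
{code}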



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-31 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13779:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-12943
   Status: Resolved  (was: Patch Available)

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.003.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.

2018-08-31 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13779:
---
Attachment: HDFS-13779-HDFS-12943.003.patch

> Implement performFailover logic for ObserverReadProxyProvider.
> --
>
> Key: HDFS-13779
> URL: https://issues.apache.org/jira/browse/HDFS-13779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Erik Krogen
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS-13779-HDFS-12943.000.patch, 
> HDFS-13779-HDFS-12943.001.patch, HDFS-13779-HDFS-12943.002.patch, 
> HDFS-13779-HDFS-12943.003.patch, HDFS-13779-HDFS-12943.WIP00.patch
>
>
> Currently {{ObserverReadProxyProvider}} inherits {{performFailover()}} method 
> from {{ConfiguredFailoverProxyProvider}}, which simply increments the index 
> and switches over to another NameNode. The logic for ORPP should be smart 
> enough to choose another observer; otherwise it can switch to an SBN, where 
> reads are disallowed, or to an ANN, which defeats the purpose of reads from 
> standby.
> This was discussed in HDFS-12976.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-08-31 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-315:
--

Assignee: Dinesh Chitlangia

> ozoneShell infoKey does not work for directories created as key and throws 
> 'KEY_NOT_FOUND' error
> 
>
> Key: HDDS-315
> URL: https://issues.apache.org/jira/browse/HDDS-315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.2.1
>
>
> infoKey for directories created using ozoneFs does not work and throws a 
> 'KEY_NOT_FOUND' error. However, the directory shows up in the 'listKey' 
> command.
> In this example, 'dir1' was created using ozoneFS, and infoKey for the 
> directory throws an error.
>  
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1
> 2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
> 2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Lookup key failed, error:KEY_NOT_FOUND
> hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
> 2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
>  "size" : 0,
>  "keyName" : "dir1/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
>  "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
>  "size" : 0,
>  "keyName" : "dir2/"
> }, {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
>  "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
>  "size" : 0,
>  "keyName" : "dir2/b1/"{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


