[jira] [Updated] (HDDS-862) Clean up SCMNodeStat in SCMNodeManager

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-862:

Target Version/s: 0.4.0

> Clean up SCMNodeStat in SCMNodeManager
> --
>
> Key: HDDS-862
> URL: https://issues.apache.org/jira/browse/HDDS-862
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira tracks the work required to clean up SCMNodeStat from the 
> SCMNodeManager.
> The code now has DatanodeInfo, which stores the nodeReport; this was 
> partially plugged in as part of HDDS-817.
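
For orientation, the cleanup amounts to deriving per-node usage from the node
report already held on DatanodeInfo, instead of maintaining a parallel
SCMNodeStat map. A minimal sketch of that direction, with hypothetical
stand-in types (this is not the actual HDDS API):

{code:java}
import java.util.List;

// Illustrative sketch: aggregate a node's usage from the storage reports
// that arrive with its node report (held on DatanodeInfo), rather than
// updating a separately tracked SCMNodeStat. StorageReport is a
// hypothetical stand-in for the real report type.
final class NodeUsage {
  interface StorageReport {
    long getCapacity();
    long getScmUsed();
    long getRemaining();
  }

  final long capacity;
  final long used;
  final long remaining;

  NodeUsage(long capacity, long used, long remaining) {
    this.capacity = capacity;
    this.used = used;
    this.remaining = remaining;
  }

  // Aggregate per-volume reports into one per-node view on demand.
  static NodeUsage fromReports(List<StorageReport> reports) {
    long capacity = 0, used = 0, remaining = 0;
    for (StorageReport r : reports) {
      capacity += r.getCapacity();
      used += r.getScmUsed();
      remaining += r.getRemaining();
    }
    return new NodeUsage(capacity, used, remaining);
  }
}
{code}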



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-862) Clean up SCMNodeStat in SCMNodeManager

2018-11-20 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-862:
---

 Summary: Clean up SCMNodeStat in SCMNodeManager
 Key: HDDS-862
 URL: https://issues.apache.org/jira/browse/HDDS-862
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira tracks the work required to clean up SCMNodeStat from the 
SCMNodeManager.

The code now has DatanodeInfo, which stores the nodeReport; this was 
partially plugged in as part of HDDS-817.






[jira] [Updated] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-816:

Attachment: HDDS-816.11.patch

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, HDDS-816.11.patch, 
> Metrics for number of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira adds the following metrics to Ozone Manager:
>  # number of volumes
>  # number of buckets
>  # number of keys
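
For context, counters like these are normally exposed through Hadoop's
metrics2 library. A minimal sketch of what such a metrics source could look
like; the class and metric names here are illustrative, not necessarily what
the attached patches implement:

{code:java}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative metrics source; names are assumptions for this sketch.
@Metrics(about = "Ozone Manager object counts", context = "ozone")
public class OmObjectCountMetrics {
  @Metric private MutableCounterLong numVolumes;
  @Metric private MutableCounterLong numBuckets;
  @Metric private MutableCounterLong numKeys;

  public static OmObjectCountMetrics create() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    return ms.register("OmObjectCountMetrics",
        "Counts of volumes, buckets and keys", new OmObjectCountMetrics());
  }

  // Called from the corresponding create handlers in the OM.
  public void incNumVolumes() { numVolumes.incr(); }
  public void incNumBuckets() { numBuckets.incr(); }
  public void incNumKeys()    { numKeys.incr(); }
}
{code}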






[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694337#comment-16694337
 ] 

Bharat Viswanadham commented on HDDS-816:
-

Attached patch v11, rebased on top of trunk.

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, HDDS-816.11.patch, 
> Metrics for number of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira adds the following metrics to Ozone Manager:
>  # number of volumes
>  # number of buckets
>  # number of keys






[jira] [Commented] (HDFS-14006) Refactor name node to allow different token verification implementations

2018-11-20 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694316#comment-16694316
 ] 

CR Hota commented on HDFS-14006:


[~elgoiri] Thanks for the review. Took care of your comments in the next patch.

> Refactor name node to allow different token verification implementations
> 
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch, HDFS-14006.002.patch
>
>
> The Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode's but modified to return a router instance 
> instead of a namenode.
>  
>  
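
The shape of the fix described above is typically a servlet-context lookup
rather than a cast to NameNode. A rough sketch under that assumption; the
attribute key and helper class are hypothetical:

{code:java}
import javax.servlet.ServletContext;

// Illustrative only: fetch the Router instance that the http server stored
// in the servlet context, instead of casting the context object to NameNode
// (which throws the ClassCastException quoted above). The attribute key is
// a hypothetical name mirroring how NameNodeHttpServer hands the NameNode
// to its web resources.
public final class RouterWebContext {
  public static final String ROUTER_ATTRIBUTE_KEY = "federation.router";

  private RouterWebContext() { }

  // Store the router when the http server starts.
  public static void bind(ServletContext context, Object router) {
    context.setAttribute(ROUTER_ATTRIBUTE_KEY, router);
  }

  // Web resources retrieve the router instead of casting to NameNode.
  public static Object getRouter(ServletContext context) {
    return context.getAttribute(ROUTER_ATTRIBUTE_KEY);
  }
}
{code}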






[jira] [Updated] (HDFS-14006) Refactor name node to allow different token verification implementations

2018-11-20 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14006:
---
Attachment: HDFS-14006.002.patch

> Refactor name node to allow different token verification implementations
> 
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch, HDFS-14006.002.patch
>
>
> The Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode's but modified to return a router instance 
> instead of a namenode.
>  
>  






[jira] [Updated] (HDFS-14006) Refactor name node to allow different token verification implementations

2018-11-20 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14006:
---
Summary: Refactor name node to allow different token verification 
implementations  (was: RBF: Support to get Router object from web context 
instead of Namenode)

> Refactor name node to allow different token verification implementations
> 
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch, HDFS-14006.002.patch
>
>
> The Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode's but modified to return a router instance 
> instead of a namenode.
>  
>  






[jira] [Commented] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694314#comment-16694314
 ] 

Hadoop QA commented on HDDS-814:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-814 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948990/HDDS-814.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b439da2408af 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c8b3dfa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1780/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1780/testReport/ |
| Max. process+thread count | 467 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1780/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-13811) RBF: Race condition between router admin quota update and periodic quota update service

2018-11-20 Thread Dibyendu Karmakar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dibyendu Karmakar updated HDFS-13811:
-
Attachment: HDFS-13811-000.patch

> RBF: Race condition between router admin quota update and periodic quota 
> update service
> ---
>
> Key: HDFS-13811
> URL: https://issues.apache.org/jira/browse/HDFS-13811
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
> Attachments: HDFS-13811-000.patch
>
>
> If we try to update the quota of an existing mount entry while the periodic 
> quota update service is running on the same mount entry, the mount table is 
> left in an _inconsistent state_.
> The transactions are:
> A - Quota update service fetches mount table entries.
> B - Quota update service updates the mount table with current usage.
> A' - User updates the quota using the admin cmd.
> With the transaction sequence [ A A' B ], the quota update service writes 
> the old quota value back to the mount table.
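
One standard way to rule out the [ A A' B ] lost update is to serialize the
admin update and the periodic refresh on the same lock, with the refresh
writing only usage. A minimal sketch of that idea (not the attached patch;
class and field names are illustrative):

{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Illustrative: both the admin quota update (A') and the periodic usage
// refresh (B) take the same lock, and the refresh never writes the quota
// back, so B can no longer overwrite A' with the stale value it read in A.
public class MountEntryQuota {
  private final ReentrantLock lock = new ReentrantLock();
  private long quota;   // configured quota
  private long usage;   // current usage reported by the refresh service

  public void adminSetQuota(long newQuota) {
    lock.lock();
    try {
      quota = newQuota;
    } finally {
      lock.unlock();
    }
  }

  public void refreshUsage(long currentUsage) {
    lock.lock();
    try {
      usage = currentUsage;  // update usage only; leave the quota untouched
    } finally {
      lock.unlock();
    }
  }
}
{code}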






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694293#comment-16694293
 ] 

Ayush Saxena commented on HDFS-14085:
-

[~elgoiri] [~ajisakaa] Thanks for the agreement!

As far as the solution is concerned, I would prefer an on-call operation that 
fetches those details only when the command is invoked, rather than an 
always-running overhead, since ls on mount points doesn't seem to be a 
frequent operation. Whenever it is called, we can get the correct info at 
that moment.

As far as implementation is concerned, we can get the correct details from 
the getFileInfo(..) API that is already there, and for now we can keep the 
same behavior it has for multiple destinations.

IMO, if we want to improve getFileInfo(..) further, we can track that 
separately.

One minor suggestion: while adding multiple destinations, we could verify 
that all destinations have the same permissions and owner details before the 
ADD, rather than getting into this situation at all.

[~surendrasingh] Any suggestions?
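
As a rough illustration of the on-call approach suggested above, with all
types as hypothetical stand-ins for the router's internals:

{code:java}
import java.io.IOException;
import java.util.List;

// Illustrative sketch: when listing /, derive a mount point's
// owner/group/permission from its (first) destination via the existing
// getFileInfo(..) path, instead of defaulting to 777 and the caller's
// identity. FileInfo, MountEntry and NamenodeClient are hypothetical.
final class MountPointStatus {
  interface FileInfo {
    String getOwner();
    String getGroup();
    short getPermission();
  }

  interface MountEntry {
    String getMountPath();
    List<String> getDestinations();
  }

  interface NamenodeClient {
    FileInfo getFileInfo(String path) throws IOException;
  }

  // Multi-destination handling would follow whatever getFileInfo(..)
  // already does today; here we simply use the first destination.
  static FileInfo statusFor(MountEntry entry, NamenodeClient client)
      throws IOException {
    String firstDest = entry.getDestinations().get(0);
    return client.getFileInfo(firstDest);
  }
}
{code}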

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission displayed 
> is the default permission (777) and the owner and group info is that of the 
> user calling it, when it should actually match the destination of the 
> mount point.






[jira] [Updated] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-20 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HDFS-13732:
--
Fix Version/s: (was: 3.2.0)
   3.2.1

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.1
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS 
> directory, the console message still reads "Set default erasure coding 
> policy on "
> Expected output:
> The EC policy name should be displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  
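
The requested behavior amounts to echoing the resolved policy name in the
confirmation line. A tiny hedged sketch; the helper and its names are
illustrative, not the ECAdmin code:

{code:java}
// Illustrative helper: name the applied policy in the confirmation message
// instead of always printing "default".
final class EcMessages {
  static String setPolicyMessage(String policyName, String path) {
    String name = (policyName == null || policyName.isEmpty())
        ? "the default erasure coding policy"
        : "erasure coding policy " + policyName;
    return "Set " + name + " on " + path;
  }
}
{code}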






[jira] [Commented] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694285#comment-16694285
 ] 

Hudson commented on HDDS-860:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15481 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15481/])
HDDS-860. Fix TestDataValidate unit tests. Contributed by Shashikant 
(shashikant: rev c8b3dfa6250cd74fb3e449748595117b244089da)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java


> Fix TestDataValidate unit tests
> ---
>
> Key: HDDS-860
> URL: https://issues.apache.org/jira/browse/HDDS-860
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-860.000.patch
>
>
> The RandomKeyGenerator code checks the completed flag in order to terminate 
> the dataValidation thread. The flag is not set even after key processing 
> completes, causing the dataValidation thread to run indefinitely.
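
In outline, the bug is a completion flag the producer never sets, so the
validator loop never exits. A minimal sketch of the pattern and the fix;
class and method names are stand-ins for the RandomKeyGenerator internals:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative: the validation thread polls 'completed'; the fix is to set
// the flag once key processing finishes (here in a finally block).
public class KeyRunnerSketch {
  private final AtomicBoolean completed = new AtomicBoolean(false);

  void processKeys() {
    try {
      // ... write and read keys ...
    } finally {
      // Without this, validateData() below never terminates.
      completed.set(true);
    }
  }

  void validateData() throws InterruptedException {
    while (!completed.get()) {
      // ... validate finished keys ...
      Thread.sleep(100);  // poll until the writer signals completion
    }
  }
}
{code}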






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Status: Patch Available  (was: Open)

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch, HDDS-814.004.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
> .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
> "Please make sure dfs.ratis.client.request.max.retries * "
> + "dfs.ratis.client.request.retry.interval > "
> + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.
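
If the assert is removed as suggested, the client only needs to sanity-check
its own retry settings; the election timeout stays a server-side concern. A
sketch of such a check, under assumed default values (the key names appear
in the snippet above, but the defaults here are illustrative):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

// Illustrative: validate only client-side retry settings on the client;
// the server reads the leader election timeout itself.
final class ClientRetryCheck {
  static void check(Configuration conf) {
    int maxRetries = conf.getInt("dfs.ratis.client.request.max.retries", 180);
    long retryIntervalMs = conf.getTimeDuration(
        "dfs.ratis.client.request.retry.interval", 1000,
        TimeUnit.MILLISECONDS);
    if (maxRetries <= 0 || retryIntervalMs <= 0) {
      throw new IllegalArgumentException(
          "Retry count and retry interval must both be positive");
    }
  }
}
{code}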






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Status: Open  (was: Patch Available)

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
> .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
> "Please make sure dfs.ratis.client.request.max.retries * "
> + "dfs.ratis.client.request.retry.interval > "
> + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Attachment: HDDS-814.004.patch

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch, HDDS-814.004.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
> .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
> "Please make sure dfs.ratis.client.request.max.retries * "
> + "dfs.ratis.client.request.retry.interval > "
> + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.






[jira] [Updated] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-860:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the review. I have committed this change to trunk.

> Fix TestDataValidate unit tests
> ---
>
> Key: HDDS-860
> URL: https://issues.apache.org/jira/browse/HDDS-860
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-860.000.patch
>
>
> The RandomKeyGenerator code checks the completed flag in order to terminate 
> the dataValidation thread. The flag is not set even after key processing 
> completes, causing the dataValidation thread to run indefinitely.






[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694273#comment-16694273
 ] 

Ayush Saxena commented on HDFS-14075:
-

Thanks [~elgoiri] for the reviews.

I have handled them all in v6. :)
{quote}Why do we do:
{quote}

This is just to record this fatal occurrence in the error logs. I had doubts 
about it too, since terminate would log this as well, but at a different log 
level; for us it is definitely an error, so it is logged here as one.

I also went back and checked logSync(), which previously handled most of the 
exception we are handling here; it behaves similarly, so I thought it better 
to stay in line with the existing code.
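
For readers following along, the pattern being discussed looks roughly like
this: log at ERROR in the edit-log path itself before handing off to
terminate, which logs under its own logger and level. The handler below is a
hedged sketch, not the actual patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.hadoop.util.ExitUtil;

// Illustrative: record the failure at ERROR under this class's own logger
// before terminating, since ExitUtil.terminate logs separately.
public class EditLogFailureHandling {
  private static final Logger LOG =
      LoggerFactory.getLogger(EditLogFailureHandling.class);

  static void handleEditLogFailure(Throwable t) {
    LOG.error("Exception while edit logging", t);
    ExitUtil.terminate(1, "Exception while edit logging: " + t.getMessage());
  }
}
{code}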

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Commented] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694277#comment-16694277
 ] 

Nanda kumar commented on HDDS-860:
--

+1, looks good to me.

> Fix TestDataValidate unit tests
> ---
>
> Key: HDDS-860
> URL: https://issues.apache.org/jira/browse/HDDS-860
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-860.000.patch
>
>
> The RandomKeyGenerator code checks the completed flag in order to terminate 
> the dataValidation thread. The flag is not set even after key processing 
> completes, causing the dataValidation thread to run indefinitely.






[jira] [Comment Edited] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694273#comment-16694273
 ] 

Ayush Saxena edited comment on HDFS-14075 at 11/21/18 5:50 AM:
---

Thanks [~elgoiri] for the reviews.

I have handled them all in v6. :)
{quote}Why do we do:
{quote}

This is just to record this fatal occurrence in the error logs. I had doubts 
about it too, since terminate would log this as well, but at a different log 
level; for us it is definitely an error, so it is logged here as one.

I also went back and checked logSync(), which previously handled most of the 
exception we are handling here; it behaves similarly, so I thought it better 
to stay in line with the existing code.


was (Author: ayushtkn):
Thax [~elgoiri] for the reviews

I have handled them all in v6. :)
{quote}Why do we do:
{quote}

This is just to put this fatal occurrence in the Error logs to my knowledge and 
belief.
I had doubts too regarding it.As terminate would be logging this too.But it had 
different log level.
And for us it is an error for sure.So its here for us.

Went back and checked logSync() too which was mostly handling this exception. 
Which we are handling here;it also had similar behavior so I thought its better 
be inline with the existing  ones. 

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}




[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694254#comment-16694254
 ] 

Hadoop QA commented on HDFS-14088:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948980/HDFS-14088.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31ac473c3b6f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a41b648 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25574/testReport/ |
| Max. process+thread count | 446 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25574/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> 

[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694265#comment-16694265
 ] 

Hudson commented on HDDS-835:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15480 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15480/])
HDDS-835. Use storageSize instead of Long for buffer size configs in 
(shashikant: rev 14e1a0a3d6cf0566ba696a73699aa7ce6ed1f94f)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestMultipleContainerReadWrite.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/ratis/RatisHelper.java


> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs should use getStorageSize instead 
> of a long value. This Jira addresses that.
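
Configuration.getStorageSize is the API referred to here; a small sketch of
reading a buffer-size setting with units rather than as a raw long. The key
name and default are illustrative stand-ins for the real Ozone client keys:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

// Illustrative: parse "64MB"-style values with getStorageSize instead of
// treating the setting as a bare long of unspecified units.
final class BufferSizeConfig {
  static long flushSizeInBytes(Configuration conf) {
    // Key and default here are hypothetical examples.
    double size = conf.getStorageSize(
        "ozone.client.stream.buffer.flush.size", "64MB", StorageUnit.BYTES);
    return (long) size;
  }
}
{code}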






[jira] [Commented] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694253#comment-16694253
 ] 

Hadoop QA commented on HDDS-860:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-860 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948981/HDDS-860.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2ec1c5d00a1a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a41b648 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1779/artifact/out/patch-unit-hadoop-ozone_tools.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1779/testReport/ |
| Max. process+thread count | 457 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1779/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694235#comment-16694235
 ] 

Hudson commented on HDDS-855:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15479 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15479/])
HDDS-855. Move OMMetadataManager from hadoop-ozone/ozone-manager to (xyao: rev 
f994b526a03738fea95583a9fccbac709e8ce47f)
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerLock.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerLock.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java


> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-855.00.patch
>
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.






[jira] [Created] (HDDS-861) TestNodeManager unit tests are broken

2018-11-20 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-861:


 Summary: TestNodeManager unit tests are broken
 Key: HDDS-861
 URL: https://issues.apache.org/jira/browse/HDDS-861
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
 Fix For: 0.4.0


Many of the tests are failing with NullPointerException
{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.SCMNodeManager.updateNodeStat(SCMNodeManager.java:195)
at 
org.apache.hadoop.hdds.scm.node.SCMNodeManager.register(SCMNodeManager.java:276)
at 
org.apache.hadoop.hdds.scm.TestUtils.createRandomDatanodeAndRegister(TestUtils.java:147)
at 
org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmHeartbeat(TestNodeManager.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

{code}
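
The trace points at updateNodeStat dereferencing an entry that the test
registration path never initialized. A defensive sketch of the kind of guard
involved; the map and stat type are simplified stand-ins for SCMNodeManager's
internals:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative guard for the NPE above: make sure a stat entry exists
// before updating it, so register() can't hit a missing entry.
final class NodeStatMap {
  static final class SCMNodeStat {
    long capacity, used, remaining;
  }

  private final Map<String, SCMNodeStat> stats = new ConcurrentHashMap<>();

  void updateNodeStat(String datanodeUuid, long capacity, long used,
      long remaining) {
    // computeIfAbsent avoids the NPE seen when a node registers before any
    // stat has been recorded for it.
    SCMNodeStat stat = stats.computeIfAbsent(datanodeUuid,
        k -> new SCMNodeStat());
    stat.capacity = capacity;
    stat.used = used;
    stat.remaining = remaining;
  }
}
{code}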






[jira] [Updated] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-855:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution and all for the reviews. I've committed 
the patch to trunk.

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-855.00.patch
>
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.






[jira] [Updated] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-9:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks all for the reviews. I've committed the patch to the feature branch.

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch, HDDS-9-HDDS-4.005.patch
>
>







[jira] [Updated] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-860:
-
Status: Patch Available  (was: Open)

> Fix TestDataValidate unit tests
> ---
>
> Key: HDDS-860
> URL: https://issues.apache.org/jira/browse/HDDS-860
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-860.000.patch
>
>
> The RandomKeyGenerator code checks the completed flag in order to terminate 
> the dataValidation thread. The flag is not set even after key processing 
> completes, causing the dataValidation thread to run indefinitely.






[jira] [Updated] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-860:
-
Attachment: HDDS-860.000.patch

> Fix TestDataValidate unit tests
> ---
>
> Key: HDDS-860
> URL: https://issues.apache.org/jira/browse/HDDS-860
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-860.000.patch
>
>
> The RandomKeyGenerator code checks the completed flag in order to terminate 
> the dataValidation thread. The flag is not set even after key processing 
> completes, causing the dataValidation thread to run indefinitely.






[jira] [Created] (HDDS-860) Fix TestDataValidate unit tests

2018-11-20 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-860:


 Summary: Fix TestDataValidate unit tests
 Key: HDDS-860
 URL: https://issues.apache.org/jira/browse/HDDS-860
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


The RandomKeyGenerator code checks the completed flag in order to terminate the 
dataValidation thread. The flag is not set even after key processing completes, 
causing the dataValidation thread to run indefinitely.
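
For context, here is a minimal sketch of the fix shape (hypothetical class and 
method names; this is not the actual RandomKeyGenerator code): the producer 
sets a volatile completed flag once key processing finishes, so the validation 
thread's loop can terminate instead of running forever.

{code:java}
// Sketch only: a volatile "completed" flag set after key processing,
// allowing the data validation thread to exit its loop.
public class DataValidationSketch {
  private volatile boolean completed = false;

  private final Thread validator = new Thread(() -> {
    // Keep validating until all keys have been processed.
    while (!completed) {
      validateNextKeyIfAvailable(); // hypothetical helper
    }
  });

  public void run() throws InterruptedException {
    validator.start();
    processAllKeys();   // hypothetical helper: generate and write keys
    completed = true;   // the missing step: signal the validator to stop
    validator.join();
  }

  private void processAllKeys() { }
  private void validateNextKeyIfAvailable() { }
}
{code}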






[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread Yuxuan Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694217#comment-16694217
 ] 

Yuxuan Wang commented on HDFS-14088:


Thanks Íñigo Goiri for reviewing.
Here's a new patch HDFS-14088.002.patch for trunk.

I'm sorry about not explaining this clearly. The patch avoids synchronizing 
the whole invocation by using double-checked locking.

The original problem is that, after performFailover() and getProxy() are 
called, the former RequestHedgingInvocationHandler instances, which should be 
deprecated, can still access currentUsedProxy. A handler can successfully pass 
the if-condition
{code:java}
currentUsedProxy != null
{code}
but the field can immediately turn to null because another thread calls 
performFailover().

My idea is to let the RequestHedgingInvocationHandler hold the 
currentUsedProxy, and to let the RequestHedgingProxyProvider hold a 
currentUsedHandler that wraps the RequestHedgingInvocationHandler into a proxy 
as before. A failover sets currentUsedHandler to null, and getProxy() assigns 
a new RequestHedgingInvocationHandler to it and returns it. This way a 
deprecated handler can no longer access a null currentUsedProxy after 
performFailover() is called.

For the unit test, I added some metric checks. The test's idea is to mock a 
call that sleeps, then call performFailover(). In the original code
{code:java}
if (currentUsedProxy != null) {
  try {
    Object retVal = method.invoke(currentUsedProxy.proxy, args);
    LOG.debug("Invocation successful on [{}]",
        currentUsedProxy.proxyInfo);
    return retVal;
  }
{code}
the debug log can then throw a NullPointerException.
I know this testing approach is a little tricky, but I can't figure out a 
better one.

I also reformatted the code.
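
To make the race concrete, here is a minimal sketch (hypothetical names, not 
the actual HDFS-14088 patch) of the underlying problem and of the 
capture-into-a-local idiom that avoids the NPE; holding the proxy inside the 
handler achieves the same effect of never re-reading a field that can be 
nulled concurrently.

{code:java}
import java.lang.reflect.Method;

// Sketch only: the NPE comes from re-reading a volatile field that another
// thread may null out between the null check and the use.
class HedgingRaceSketch {
  static class ProxyInfo {
    Object proxy;
    String info;
  }

  private volatile ProxyInfo currentUsedProxy;

  Object invoke(Method method, Object[] args) throws Exception {
    // Capture the reference once; every later use goes through the local,
    // so a concurrent performFailover() nulling the field cannot NPE here.
    ProxyInfo used = currentUsedProxy;
    if (used != null) {
      return method.invoke(used.proxy, args);
    }
    return null; // the real provider would fall back to hedged invocation
  }

  synchronized void performFailover() {
    currentUsedProxy = null; // a later getProxy() assigns a fresh handler
  }
}
{code}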

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
> Attachments: HDFS-14088.001.patch, HDFS-14088.002.patch
>
>
> {code:java}
> if (currentUsedProxy != null) {
>   try {
>     Object retVal = method.invoke(currentUsedProxy.proxy, args);
>     LOG.debug("Invocation successful on [{}]",
>         currentUsedProxy.proxyInfo);
> {code}
> If a thread runs the try block and another thread then triggers a failover 
> by calling
> {code:java}
> @Override
> public synchronized void performFailover(T currentProxy) {
>   toIgnore = this.currentUsedProxy.proxyInfo;
>   this.currentUsedProxy = null;
> }
> {code}
> this sets currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.






[jira] [Updated] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread Yuxuan Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxuan Wang updated HDFS-14088:
---
Attachment: HDFS-14088.002.patch

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
> Attachments: HDFS-14088.001.patch, HDFS-14088.002.patch
>
>
> {code:java}
> if (currentUsedProxy != null) {
>   try {
>     Object retVal = method.invoke(currentUsedProxy.proxy, args);
>     LOG.debug("Invocation successful on [{}]",
>         currentUsedProxy.proxyInfo);
> {code}
> If a thread runs the try block and another thread then triggers a failover 
> by calling
> {code:java}
> @Override
> public synchronized void performFailover(T currentProxy) {
>   toIgnore = this.currentUsedProxy.proxyInfo;
>   this.currentUsedProxy = null;
> }
> {code}
> this sets currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.






[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694206#comment-16694206
 ] 

Hadoop QA commented on HDDS-816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | 

[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694198#comment-16694198
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948961/HDFS-14075-06.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b3405f6a93a8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a41b648 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25573/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25573/testReport/ |
| Max. process+thread count | 3941 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25573/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Commented] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694193#comment-16694193
 ] 

Hadoop QA commented on HDDS-814:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-hdds/common: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-814 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948971/HDDS-814.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 55f45fbf3ba8 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a41b648 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1778/artifact/out/diff-checkstyle-hadoop-hdds_common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1778/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1778/testReport/ |
| Max. process+thread count | 446 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: 

[jira] [Comment Edited] (HDFS-14091) RBF: File Read and Writing is failing when security is enabled.

2018-11-20 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694127#comment-16694127
 ] 

Brahma Reddy Battula edited comment on HDFS-14091 at 11/21/18 3:27 AM:
---

[~RANith] thanks for reporting. I planned to do this under HDFS-13655, so 
let's do it there.

Also, this will not be a blocker, since the default value of 
"dfs.encrypt.data.transfer" is "false" (it is only enabled for data 
encryption).

Coming to the patch: the encryption key is based on the BPID, so I feel we 
need to get keys from all the namespaces and return the one for the requested 
namespace.


was (Author: brahmareddy):
[~RANith] thanks for reporting. I planned to do this under HDFS-13655.

Also, this will not be a blocker, since the default value of 
"dfs.encrypt.data.transfer" is "false" (it is only enabled for data 
encryption).

Coming to the patch: the encryption key is based on the BPID, so I feel we 
need to get keys from all the namespaces and return the one for the requested 
namespace.

> RBF: File Read and Writing is failing when security is enabled.
> ---
>
> Key: HDFS-14091
> URL: https://issues.apache.org/jira/browse/HDFS-14091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-13532
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Blocker
> Attachments: HDFS-14091.001.patch
>
>
> 2018-11-20 14:20:53,127 INFO hdfs.DataStreamer: Exception in 
> createBlockOutputStream blk_1073741872_1048
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "getDataEncryptionKey" is not supported
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:436)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDataEncryptionKey(RouterRpcServer.java:1965)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolServerSideTranslatorPB.java:1214)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
>  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1466)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1376)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>  at com.sun.proxy.$Proxy11.getDataEncryptionKey(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolTranslatorPB.java:1133)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:497)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy12.getDataEncryptionKey(Unknown Source)
>  at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1824)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:214)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1795)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)




[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-11-20 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694153#comment-16694153
 ] 

Akira Ajisaka commented on HDFS-14011:
--

Thanks [~surendrasingh] for pinging me. Commented in HDFS-14085.

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of the 
> mount point; therefore, 'hdfs dfs -ls' on a directory that includes a mount 
> point returns incorrect information.






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-20 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694151#comment-16694151
 ] 

Akira Ajisaka commented on HDFS-14085:
--

bq. A user from the directory owner group will get a permission denied 
exception and may get confused when executing "dfs -ls".
Agreed. Now I'm +1 for showing the destination folders to avoid confusing 
users.

The problems are:
* The destination folder may not exist.
* There may be multiple destination folders whose owner/group/permissions 
differ.

When these problems occur, there is some misconfiguration in the cluster. I'd 
like to show warnings in the Router Web UI and log a warning message asking 
the admins to fix them.

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777), and the owner and group info is 
> the same as that of the user calling it, whereas it should actually match 
> the destination of the mount point.






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Status: Patch Available  (was: Open)

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
>     .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
>         "Please make sure dfs.ratis.client.request.max.retries * "
>             + "dfs.ratis.client.request.retry.interval > "
>             + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.
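
To illustrate the concern, a minimal sketch (hypothetical names, not the 
actual Ozone code): the client and the leader each resolve the key against 
their own local configuration, so an assertion evaluated on the client proves 
nothing about the value the leader actually uses.

{code:java}
import java.util.Properties;

// Sketch only: two processes reading the same key from different configs.
class ConfigCouplingSketch {
  static long leaderElectionTimeoutMs(Properties conf) {
    return Long.parseLong(conf.getProperty(
        "dfs.ratis.leader.election.minimum.timeout.duration", "1000"));
  }

  public static void main(String[] args) {
    Properties clientConf = new Properties(); // client falls back to default
    Properties leaderConf = new Properties();
    leaderConf.setProperty(
        "dfs.ratis.leader.election.minimum.timeout.duration", "5000");

    long maxRetryCount = 180;
    long retryIntervalMs = 1000;
    // Passes against the client's view (1000 ms) even though the leader
    // actually uses 5000 ms, so the check guarantees nothing about the leader.
    boolean ok = maxRetryCount * retryIntervalMs
        > 5 * leaderElectionTimeoutMs(clientConf);
    System.out.println("client-side check passed: " + ok
        + ", leader timeout actually " + leaderElectionTimeoutMs(leaderConf));
  }
}
{code}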






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Status: Open  (was: Patch Available)

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
>     .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
>         "Please make sure dfs.ratis.client.request.max.retries * "
>             + "dfs.ratis.client.request.retry.interval > "
>             + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.






[jira] [Updated] (HDDS-814) dfs.ratis.leader.election.minimum.timeout.duration should not be read by client

2018-11-20 Thread chencan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chencan updated HDDS-814:
-
Attachment: HDDS-814.003.patch

> dfs.ratis.leader.election.minimum.timeout.duration should not be read by 
> client
> ---
>
> Key: HDDS-814
> URL: https://issues.apache.org/jira/browse/HDDS-814
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-814.001.patch, HDDS-814.002.patch, 
> HDDS-814.003.patch
>
>
> dfs.ratis.leader.election.minimum.timeout.duration is read by the client for 
> the following assertion.
> {code}
> Preconditions
>     .assertTrue(maxRetryCount * retryInterval > 5 * leaderElectionTimeout,
>         "Please make sure dfs.ratis.client.request.max.retries * "
>             + "dfs.ratis.client.request.retry.interval > "
>             + "5 * dfs.ratis.leader.election.minimum.timeout.duration");
> {code}
> This does not guarantee that the leader is using the same value as the 
> client. We should probably just ensure that the defaults are sane and remove 
> this assert.






[jira] [Updated] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-20 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-14082:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

Committed to HDFS-13891 branch.
Thanks [~elgoiri] for the contribution.

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14082-HDFS-13891.002.patch, 
> HDFS-14082-HDFS-13891.003.patch, HDFS-14082.000.patch, HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, operations like {{getListing()}} still 
> succeed. We should add an option to fail the operation if one of the 
> subclusters is unavailable.






[jira] [Commented] (HDDS-817) Create SCM metrics for disk from node report

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694141#comment-16694141
 ] 

Bharat Viswanadham commented on HDDS-817:
-

Thank you [~linyiqun] for the review and the commit.

> Create SCM metrics for disk from node report
> 
>
> Key: HDDS-817
> URL: https://issues.apache.org/jira/browse/HDDS-817
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-817.00.patch, HDDS-817.01.patch
>
>
> # Disk usage for HDD and SSD
> # Total number of datanodes in the cluster (Running, Unhealthy, Failed); add 
> a UT for this implementation, which already exists






[jira] [Commented] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-20 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694137#comment-16694137
 ] 

Yiqun Lin commented on HDFS-14082:
--

LGTM, +1.

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14082-HDFS-13891.002.patch, 
> HDFS-14082-HDFS-13891.003.patch, HDFS-14082.000.patch, HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, operations like {{getListing()}} still 
> succeed. We should add an option to fail the operation if one of the 
> subclusters is unavailable.






[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694130#comment-16694130
 ] 

Hadoop QA commented on HDDS-9:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 6s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 19m 
17s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
3s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m  
7s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} 

[jira] [Commented] (HDFS-14091) RBF: File Read and Writing is failing when security is enabled.

2018-11-20 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694127#comment-16694127
 ] 

Brahma Reddy Battula commented on HDFS-14091:
-

[~RANith] thanks for reporting. I planned to do this under HDFS-13655.

Also, this will not be a blocker, since the default value of 
"dfs.encrypt.data.transfer" is "false" (it is only enabled for data 
encryption).

Coming to the patch: the encryption key is based on the BPID, so I feel we 
need to get keys from all the namespaces and return the one for the requested 
namespace.

> RBF: File Read and Writing is failing when security is enabled.
> ---
>
> Key: HDFS-14091
> URL: https://issues.apache.org/jira/browse/HDFS-14091
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-13532
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Blocker
> Attachments: HDFS-14091.001.patch
>
>
> 2018-11-20 14:20:53,127 INFO hdfs.DataStreamer: Exception in 
> createBlockOutputStream blk_1073741872_1048
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "getDataEncryptionKey" is not supported
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:436)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getDataEncryptionKey(RouterRpcServer.java:1965)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolServerSideTranslatorPB.java:1214)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
>  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1466)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1376)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>  at com.sun.proxy.$Proxy11.getDataEncryptionKey(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDataEncryptionKey(ClientNamenodeProtocolTranslatorPB.java:1133)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:497)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy12.getDataEncryptionKey(Unknown Source)
>  at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:1824)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:214)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1795)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)






[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694123#comment-16694123
 ] 

Bharat Viswanadham commented on HDDS-816:
-

I think Jenkins is not working properly, as the run failed while resolving 
ratis 0.1.0-SNAPSHOT.

Retriggered the Jenkins run.

I will wait until tomorrow; if there are no further comments, I will commit 
this.

 
{code:java}
Failed to execute goal on project hadoop-hdds-common: Could not resolve 
dependencies for project 
org.apache.hadoop:hadoop-hdds-common:jar:0.4.0-SNAPSHOT: Failure to find 
org.apache.ratis:ratis-thirdparty:jar:0.1.0-SNAPSHOT in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced{code}

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys






[jira] [Comment Edited] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694123#comment-16694123
 ] 

Bharat Viswanadham edited comment on HDDS-816 at 11/21/18 2:27 AM:
---

I think Jenkins is not working properly, as the run failed while resolving 
ratis 0.1.0-SNAPSHOT.

Retriggered the Jenkins run.

I will wait until tomorrow; if there are no further comments, I will commit 
this tomorrow morning.

 
{code:java}
Failed to execute goal on project hadoop-hdds-common: Could not resolve 
dependencies for project 
org.apache.hadoop:hadoop-hdds-common:jar:0.4.0-SNAPSHOT: Failure to find 
org.apache.ratis:ratis-thirdparty:jar:0.1.0-SNAPSHOT in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced{code}


was (Author: bharatviswa):
I think Jenkins is not working properly, as the run failed while resolving 
ratis 0.1.0-SNAPSHOT.

Retriggered the Jenkins run.

I will wait until tomorrow; if there are no further comments, I will commit 
this.

 
{code:java}
Failed to execute goal on project hadoop-hdds-common: Could not resolve 
dependencies for project 
org.apache.hadoop:hadoop-hdds-common:jar:0.4.0-SNAPSHOT: Failure to find 
org.apache.ratis:ratis-thirdparty:jar:0.1.0-SNAPSHOT in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced{code}

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys






[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694122#comment-16694122
 ] 

Hadoop QA commented on HDDS-816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | 

[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16694117#comment-16694117
 ] 

Hadoop QA commented on HDDS-816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m  
9s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 14m 
59s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} ozone-manager in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} integration-test in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} ozone-manager in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
9s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 13m 
46s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 46s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} ozone-manager in the patch 

[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Attachment: HDFS-14075-06.patch

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch, HDFS-14075-06.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, the following exception was received:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693977#comment-16693977
 ] 

Hadoop QA commented on HDDS-816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | 

[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693952#comment-16693952
 ] 

Xiaoyu Yao commented on HDDS-9:
---

Fixed the checkstyle issue in the v5 patch.

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch, HDDS-9-HDDS-4.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693956#comment-16693956
 ] 

Arpit Agarwal commented on HDDS-816:


Thanks [~bharatviswa]. +1 for the v10 patch, pending Jenkins.

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693954#comment-16693954
 ] 

Bharat Viswanadham commented on HDDS-816:
-

Thank you [~arpitagarwal] for pointing out the issue in the KeyManagerImpl stop 
logic in patch v09.

Patch v10 addresses the issue.

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-816:

Attachment: HDDS-816.10.patch

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, HDDS-816.10.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-9:
--
Attachment: HDDS-9-HDDS-4.005.patch

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch, HDDS-9-HDDS-4.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693947#comment-16693947
 ] 

Bharat Viswanadham commented on HDDS-816:
-

Thank you [~arpitagarwal] for the offline discussion.

Updated the start and stop logic so they no longer rely on the initialized flag.

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, Metrics for number of volumes, buckets, 
> keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-816:

Attachment: HDDS-816.09.patch

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, HDDS-816.09.patch, Metrics for number of volumes, buckets, 
> keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-859) Fix NPE ServerUtils#getOzoneMetaDirPath

2018-11-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-859:

Description: 
This can be reproduced with "mvn test" under the hadoop-ozone project, but not 
with an individual test run under IntelliJ.

 
{code:java}
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 0.028 s  <<< FAILURE!
java.lang.AssertionError:
Expected: an instance of java.lang.IllegalArgumentException
     but:  is a java.lang.NullPointerException
Stacktrace was: java.lang.NullPointerException
        at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
        at org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
        at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
        at org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}

  was:
This can be reproed with TestOmUtils#testNoOmDbDirConfigured

 

{code}

Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 0.028 s  <<< FAILURE!
java.lang.AssertionError:
Expected: an instance of java.lang.IllegalArgumentException
     but:  is a java.lang.NullPointerException
Stacktrace was: java.lang.NullPointerException
        at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
        at org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
        at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
        at org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}
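
One straightforward direction for the fix, given the trace above: validate the 
configured metadata directory up front and throw IllegalArgumentException, 
rather than letting Preconditions.checkNotNull surface an NPE. A minimal hedged 
sketch, assuming a config key constant like HddsConfigKeys.OZONE_METADATA_DIRS 
(illustrative, not the committed patch):

{code:java}
// Hedged sketch: reject a missing metadata-dirs setting with an
// IllegalArgumentException instead of an NPE from checkNotNull.
public static File getOzoneMetaDirPath(Configuration conf) {
  String metaDirPath = conf.getTrimmed(HddsConfigKeys.OZONE_METADATA_DIRS);
  if (metaDirPath == null || metaDirPath.isEmpty()) {
    throw new IllegalArgumentException(
        HddsConfigKeys.OZONE_METADATA_DIRS + " must be configured.");
  }
  File metaDir = new File(metaDirPath);
  if (!metaDir.exists() && !metaDir.mkdirs()) {
    throw new IllegalArgumentException(
        "Unable to create the metadata directory: " + metaDir);
  }
  return metaDir;
}
{code}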


> Fix NPE ServerUtils#getOzoneMetaDirPath
> ---
>
> Key: HDDS-859
> URL: https://issues.apache.org/jira/browse/HDDS-859
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> This can be reproduced with "mvn test" under the hadoop-ozone project, but 
> not with an individual test run under IntelliJ.
>  
> {code:java}
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
> testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 
> 0.028 s  <<< FAILURE!
> java.lang.AssertionError:
>  
> Expected: an instance of java.lang.IllegalArgumentException
>      but:  is a java.lang.NullPointerException
> Stacktrace was: java.lang.NullPointerException
>         at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
>         at 
> org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
>         at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
>         at 
> org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> 

[jira] [Commented] (HDDS-791) Support Range header for ozone s3 object download

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693925#comment-16693925
 ] 

Hadoop QA commented on HDDS-791:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-ozone/s3gateway generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} s3gateway in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-ozone/s3gateway |
|  |  org.apache.hadoop.ozone.s3.endpoint.ObjectEndpoint.get(String, String, 
InputStream) may fail to close stream  At ObjectEndpoint.java:to close stream  
At ObjectEndpoint.java:[line 207] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948925/HDDS-791.02.patch |
| Optional 

[jira] [Commented] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693922#comment-16693922
 ] 

Hadoop QA commented on HDFS-14006:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 109 unchanged - 0 fixed = 111 total (was 109) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14006 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948920/HDFS-14006.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d74f986b33c9 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1734ace |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25572/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Created] (HDDS-859) Fix NPE ServerUtils#getOzoneMetaDirPath

2018-11-20 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-859:
---

 Summary: Fix NPE ServerUtils#getOzoneMetaDirPath
 Key: HDDS-859
 URL: https://issues.apache.org/jira/browse/HDDS-859
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This can be reproduced with TestOmUtils#testNoOmDbDirConfigured

 

{code}

Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.33 s <<< FAILURE! - in org.apache.hadoop.ozone.TestOmUtils
testNoOmDbDirConfigured(org.apache.hadoop.ozone.TestOmUtils)  Time elapsed: 0.028 s  <<< FAILURE!
java.lang.AssertionError:
Expected: an instance of java.lang.IllegalArgumentException
     but:  is a java.lang.NullPointerException
Stacktrace was: java.lang.NullPointerException
        at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
        at org.apache.hadoop.hdds.server.ServerUtils.getOzoneMetaDirPath(ServerUtils.java:130)
        at org.apache.hadoop.ozone.OmUtils.getOmDbDir(OmUtils.java:141)
        at org.apache.hadoop.ozone.TestOmUtils.testNoOmDbDirConfigured(TestOmUtils.java:89)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693891#comment-16693891
 ] 

Bharat Viswanadham commented on HDDS-816:
-

Thank You [~arpitagarwal] for the review.

I have addressed all the review comments except the one below.
 # 
{quote}Should _intialized = true_ be set inside the start method itself? Else 
the second time you invoke start, initialized will be left to false.{quote}

I think we don't need to do this inside start() itself: when the object is 
constructed we call start() and set the flag to true, and if start() is invoked 
again we should not run the initialization a second time, which the 
if (!initialized) check takes care of, since initialized is already true. A 
sketch of this pattern follows below.
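
For illustration, a hedged sketch of the start/stop pattern being discussed; 
the flag and method names follow the conversation, not necessarily the exact 
committed code:

{code:java}
// Hedged sketch: start() performs one-time initialization guarded by a flag,
// so a second call to start() does not repeat the setup.
private boolean initialized = false;

public synchronized void start(OzoneConfiguration conf) {
  if (!initialized) {
    // one-time setup: open background services, register metrics, etc.
    initialized = true;
  }
  // work that is safe to repeat on every start() would go here
}

public synchronized void stop() {
  // shut down background services; 'initialized' stays true so a
  // subsequent start() does not redo the one-time setup
}
{code}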

 

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, Metrics for number of volumes, buckets, keys.pdf, Proposed 
> Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693893#comment-16693893
 ] 

Ajay Kumar commented on HDDS-9:
---

+1 with checkstyle addressed.

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-20 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-858:
---

 Summary: Start a Standalone Ratis Server on OM
 Key: HDDS-858
 URL: https://issues.apache.org/jira/browse/HDDS-858
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: OM
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


We propose implementing a standalone Ratis server on OM as a first step. Once 
the Ratis server and state machine are integrated into OM, the replicated Ratis 
state machine can be implemented for OM.

This Jira aims only to start a Ratis server when OM starts, as sketched below. 
The client-OM communication and OM state will not be changed in this Jira.
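
A hedged sketch of what starting a standalone (single-peer) Ratis server during 
OM startup could look like, using the Apache Ratis builder API; exact 
signatures vary across Ratis versions, and the peer id, address, and 
placeholder state machine are illustrative assumptions:

{code:java}
// Hedged sketch: build and start a one-node Ratis server for OM.
RaftProperties properties = new RaftProperties();
RaftPeerId omPeerId = RaftPeerId.valueOf("omNode-1");        // assumed id
RaftPeer omPeer = new RaftPeer(omPeerId, "0.0.0.0:9872");    // assumed address
RaftGroup omGroup = RaftGroup.valueOf(
    RaftGroupId.valueOf(UUID.randomUUID()), omPeer);         // single-peer group

RaftServer omRatisServer = RaftServer.newBuilder()
    .setServerId(omPeerId)
    .setGroup(omGroup)
    .setProperties(properties)
    // placeholder state machine until the OM state machine is implemented
    .setStateMachine(new BaseStateMachine())
    .build();
omRatisServer.start();
{code}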



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-816:

Attachment: HDDS-816.08.patch

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> HDDS-816.08.patch, Metrics for number of volumes, buckets, keys.pdf, Proposed 
> Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-791) Support Range header for ozone s3 object download

2018-11-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693876#comment-16693876
 ] 

Bharat Viswanadham commented on HDDS-791:
-

Thank you, [~elek], for the review.

I have addressed the Jenkins-reported issues.

 

> Support Range header for ozone s3 object download
> -
>
> Key: HDDS-791
> URL: https://issues.apache.org/jira/browse/HDDS-791
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-791.00.patch, HDDS-791.01.patch, HDDS-791.02.patch
>
>
> Using the S3 REST API, smaller chunks of an object can be downloaded using 
> Range headers:
> For example:
> {code}
> GET /example-object HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> x-amz-date: Fri, 28 Jan 2011 21:32:02 GMT
> Range: bytes=0-9
> Authorization: AWS AKIAIOSFODNN7EXAMPLE:Yxg83MZaEgh3OZ3l0rLo5RTX11o=
> Sample Response with Specified Range of the Object Bytes
> {code}
> This can be implemented using the seek method on OzoneInputStream (see the 
> sketch after the references below).
> Range header support is one of the missing pieces for fully supporting the 
> s3a interface.
> References:
> Range header spec:
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
> Aws s3 doc:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
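
To make the seek-based approach concrete, a minimal hedged sketch (the helper 
and its parameters are illustrative assumptions, not the actual s3gateway 
code): parse the Range header, seek to the start offset, and copy only the 
requested bytes.

{code:java}
// Hedged sketch: serve "Range: bytes=start-end" by seeking on the
// OzoneInputStream and copying the requested window to the response.
static void copyRange(OzoneInputStream in, OutputStream out,
    String rangeHeader, long objectLength) throws IOException {
  // e.g. "bytes=0-9" -> start=0, end=9 (both inclusive, per RFC 2616)
  String[] bounds = rangeHeader.substring("bytes=".length()).split("-", 2);
  long start = Long.parseLong(bounds[0]);
  long end = (bounds.length > 1 && !bounds[1].isEmpty())
      ? Long.parseLong(bounds[1]) : objectLength - 1;

  in.seek(start);  // OzoneInputStream supports seek, as noted above
  long remaining = end - start + 1;
  byte[] buffer = new byte[8192];
  while (remaining > 0) {
    int read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
    if (read < 0) {
      break;  // underlying stream ended early
    }
    out.write(buffer, 0, read);
    remaining -= read;
  }
}
{code}

A complete implementation would also validate the header, handle suffix ranges 
such as "bytes=-100", and return 206 Partial Content with a Content-Range 
header.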



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-791) Support Range header for ozone s3 object download

2018-11-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-791:

Attachment: HDDS-791.02.patch

> Support Range header for ozone s3 object download
> -
>
> Key: HDDS-791
> URL: https://issues.apache.org/jira/browse/HDDS-791
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-791.00.patch, HDDS-791.01.patch, HDDS-791.02.patch
>
>
> Using the S3 REST API, smaller chunks of an object can be downloaded using 
> Range headers:
> For example:
> {code}
> GET /example-object HTTP/1.1
> Host: example-bucket.s3.amazonaws.com
> x-amz-date: Fri, 28 Jan 2011 21:32:02 GMT
> Range: bytes=0-9
> Authorization: AWS AKIAIOSFODNN7EXAMPLE:Yxg83MZaEgh3OZ3l0rLo5RTX11o=
> Sample Response with Specified Range of the Object Bytes
> {code}
> This can be implemented using the seek method on OzoneInputStream.
> Range header support is one of the missing pieces for fully supporting the 
> s3a interface.
> References:
> Range header spec:
> https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
> Aws s3 doc:
> https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-855) Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common

2018-11-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693861#comment-16693861
 ] 

Xiaoyu Yao commented on HDDS-855:
-

+1 for the patch. But it seems all Ozone tests are broken by some 
maven/surefire issue.

{code}

[ERROR] ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /testptch/hadoop/hadoop-ozone/common && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /testptch/hadoop/hadoop-ozone/common/target/surefire/surefirebooter6498890699449154480.jar /testptch/hadoop/hadoop-ozone/common/target/surefire 2018-11-19T20-57-54_125-jvmRun2 surefire7299819707577424654tmp surefire_47360755220146958890tmp
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:494)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:441)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:293)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1149)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:978)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:854)
[ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
[ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)

{code}

> Move OMMetadataManager from hadoop-ozone/ozone-manager to hadoop-ozone/common
> -
>
> Key: HDDS-855
> URL: https://issues.apache.org/jira/browse/HDDS-855
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-855.00.patch
>
>
> Move {{OMMetadataManager}} from hadoop-ozone/ozone-manager to 
> hadoop-ozone/common. This will allow usage of OMMetadataManagerImpl in 
> {{SecurityManager}}, which will be in the common module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-20 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693846#comment-16693846
 ] 

Arpit Agarwal edited comment on HDDS-816 at 11/20/18 10:21 PM:
---

A few comments:
 # You can just replace this comment:
{code:java}
   * Returns number of rows in a table.  This should not be used on the keyTable
   * which will take a very long time. Currently this is used for counting now
   * of buckets and volumes from bucket and volume table during OM start.
{code}
with something like
{code:java}
   * Returns number of rows in a table.  This should not be used for very large 
tables.
{code}
 # Should _intialized = true_ be set inside the start method itself? Else the 
second time you invoke start, initialized will be left to false.
 # Bad import in OMMetadataManager.java and OmMetadataManagerImpl. Where is it 
being used?
{code:java}
import com.sun.tools.internal.ws.wsdl.document.jaxws.Exception;
{code}
 # Let's avoid caching a copy of the conf object. It's better to pass 
configuration as a parameter to the start routine. This applies to both 
KeyManagerImpl and  OmMetadataManagerImpl.
{code:java}
this.configuration = conf;
{code}
 # Let's remove this comment line.
{code:java}
  // Which can be useful for ozone admins, to know about the system.
{code}
 # I think this comment can be made clearer:
{code:java}
  //TODO: After an unclean shutdown, this value might have inaccuracy of
  // actual original key count. The inaccuracy can be invalid forever and
  // the difference depends from the persisting time period + no of keys
  // created during this period.
{code}
Instead you can say something like
{code:java}
This metric is an estimate and it may be inaccurate on restart if the OM 
process was not shutdown cleanly. 
Key creations/deletions in the last few minutes before restart may not be 
included in this count.
{code}


was (Author: arpitagarwal):
A few comments:
 # You can just replace this comment:
{code:java}
   * Returns number of rows in a table.  This should not be used on the keyTable
   * which will take a very long time. Currently this is used for counting now
   * of buckets and volumes from bucket and volume table during OM start.
{code}
with something like
{code:java}
   * Returns number of rows in a table.  This should not be used for very large 
tables.
{code}

 # Should _intialized = true_ be set inside the start method itself? Else the 
second time you invoke start, initialized will be left to false.
 # Bad import in OMMetadataManager.java and OmMetadataManagerImpl. Where is it 
being used?
{code:java}
import com.sun.tools.internal.ws.wsdl.document.jaxws.Exception;
{code}

 # Let's avoid caching a copy of the conf object. It's better to pass 
configuration as a parameter to the start routine. This applies to both 
KeyManagerImpl and  OmMetadataManagerImpl.
{code:java}
this.configuration = conf;
{code}

 # Let's remove this comment line.
{code:java}
  // Which can be useful for ozone admins, to know about the system.
{code}

 # I think this comment can be made clearer:
{code:java}
  //TODO: After an unclean shutdown, this value might have inaccuracy of
  // actual original key count. The inaccuracy can be invalid forever and
  // the difference depends from the persisting time period + no of keys
  // created during this period.
{code}
Instead you can say something like
{code:java}
This metric is an estimate and it may be inaccurate on restart if the OM 
process was not shutdown cleanly. 
Key creations/deletions in the last few minutes before restart may not be 
included in this count.
{code}

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, HDDS-816.07.patch, 
> Metrics for number of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org





[jira] [Commented] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693845#comment-16693845
 ] 

Íñigo Goiri commented on HDFS-14006:


The refactor looks pretty reasonable.
I have a couple nits:
* Line break before the javadoc in {{TokenVerifier}}.
* {{web hdfs}} to {{WebHDFS}}.
* A little longer javadoc in {{TokenVerifier}} to explain who uses this and so 
on.
* Move the javadoc to {{TokenVerifier#verifyToken}} and leave 
{{NameNode#verifyToken}} as Override without javadoc (see the sketch below).

I would also change the title of the JIRA; there is no RBF change here, all 
that is here is a refactor in the NN.
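
For the javadoc nit, a minimal sketch of the suggested arrangement; the 
signature is illustrative, not the actual one:
{code:java}
public interface TokenVerifier {
  /**
   * Verifies a delegation token. Used by the WebHDFS resources to validate
   * tokens before serving requests, on both the NameNode and the Router.
   */
  void verifyToken(Token<DelegationTokenIdentifier> token) throws IOException;
}

// In NameNode: no javadoc needed, it is inherited from the interface.
@Override
public void verifyToken(Token<DelegationTokenIdentifier> token)
    throws IOException {
  // ...
}
{code}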

> RBF: Support to get Router object from web context instead of Namenode
> --
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch
>
>
> Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secured mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception.
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode one, but modified to return a router instance 
> instead of a namenode.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693822#comment-16693822
 ] 

Hadoop QA commented on HDFS-14064:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
25s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948904/HDFS-14064-05.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f1cc253c5fcb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c747830 |

[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693811#comment-16693811
 ] 

Hadoop QA commented on HDDS-9:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
4s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 17m  
7s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 2 new + 2 unchanged - 
0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  9m 
54s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | 

[jira] [Commented] (HDDS-284) CRC for ChunksData

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693809#comment-16693809
 ] 

Hadoop QA commented on HDDS-284:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF 

[jira] [Updated] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-20 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14006:
---
Attachment: HDFS-14006.001.patch
Status: Patch Available  (was: Open)

> RBF: Support to get Router object from web context instead of Namenode
> --
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14006.001.patch
>
>
> Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secured mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception.
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, similar 
> to the current namenode one, but modified to return a router instance 
> instead of a namenode.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-20 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693789#comment-16693789
 ] 

CR Hota commented on HDFS-13972:


[~elgoiri] [~brahmareddy] Thanks for your initial review.

Based on what Brahma has mentioned, I am going to refactor the NameNode first 
and then come back and look at this again.

I had already created the refactor ticket earlier, knowing that this was 
inevitable: HDFS-14006.

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693775#comment-16693775
 ] 

Íñigo Goiri commented on HDFS-14075:


Thanks for the clarification.
Why do we do:
{code}
LOG.error(msg, new Exception());
{code}
Is that to print the stack trace?

Some other minor comments:
* For {{TestEditLog}}#986, we should keep the line breaking as before.
* Avoid line break in TestEditLogJournalFailures#292
* Avoid TestEditLogJournalFailures#56
* {{assertTrue(re.getClassName().contains("ExitException"));}} could check the 
exception itself.

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693769#comment-16693769
 ] 

Íñigo Goiri commented on HDFS-14088:


Thanks [~John Smith] for the patch.
There are a lot of changes, but in reality it is just adding the synchronized 
block. We are pretty much doing everything within a synchronized block except 
for the exception report and the null check; not sure there is a point in that 
rather than making the whole thing synchronized.

Then, regarding currentUsedHandler and currentUsedProxy, it might be better to 
just leave the variables as they were and introduce a separate lock variable.

For the unit test, should we check some metric in particular?
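
For reference, a minimal sketch of the snapshot-under-lock pattern being 
discussed, reusing the names from the snippets in the description:
{code:java}
// Sketch only: read currentUsedProxy under the same lock that
// performFailover() takes, so it cannot be set to null between the
// null check and the invocation.
ProxyInfo<T> proxySnapshot;
synchronized (this) {
  proxySnapshot = currentUsedProxy;
}
if (proxySnapshot != null) {
  Object retVal = method.invoke(proxySnapshot.proxy, args);
  LOG.debug("Invocation successful on [{}]", proxySnapshot.proxyInfo);
  return retVal;
}
{code}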

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
> Attachments: HDFS-14088.001.patch
>
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If a thread run try block and then other thread trigger a fail over calling 
> method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693741#comment-16693741
 ] 

Hudson commented on HDDS-835:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15477 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15477/])
Revert "HDDS-835. Use storageSize instead of Long for buffer size (shashikant: 
rev 1734ace35f1c92ff37ccf7f8545b4d74ecbc1cca)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/ratis/RatisHelper.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestMultipleContainerReadWrite.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestFailureHandlingByClient.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java


> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch, HDDS-835.001.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs, we should use getStorageSize instead 
> of a long value. This Jira aims to address this.
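
For reference, a minimal sketch of the getStorageSize pattern; the key name and 
default below are illustrative:
{code:java}
// Sketch only: reads a size config that accepts unit suffixes such as
// "4KB" or "128MB", instead of a raw long number of bytes.
long flushSize = (long) conf.getStorageSize(
    "ozone.client.stream.buffer.flush.size", "64MB", StorageUnit.BYTES);
{code}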



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-804) Block token: Add secret token manager

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693728#comment-16693728
 ] 

Hadoop QA commented on HDDS-804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
12s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
10s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
21s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948903/HDDS-804-HDDS-4.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5dce4b1de2e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / ffe5e7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1769/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
| unit | 

[jira] [Commented] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693712#comment-16693712
 ] 

Hadoop QA commented on HDFS-14082:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948901/HDFS-14082-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 007758cfa3f4 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4d8cc85 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25569/testReport/ |
| Max. process+thread count | 1463 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25569/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This 

[jira] [Commented] (HDDS-284) CRC for ChunksData

2018-11-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693708#comment-16693708
 ] 

Hanisha Koneru commented on HDDS-284:
-

{quote}1. Checksum#longToBytes  can be replaced with Longs.toByteArray() from 
com.google.common.primitives.Longs package.
{quote}
Addressed this in patch v06. Also fixed the checkstyle and findbugs errors.
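
For reference, the Guava call mentioned in the quoted comment; the variable 
names are illustrative:
{code:java}
import com.google.common.primitives.Longs;

// Big-endian, fixed 8-byte encoding of a long, equivalent to a
// hand-rolled longToBytes helper.
byte[] checksumBytes = Longs.toByteArray(checksumValue);
{code}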

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.006.patch, HDDS-284.01.patch, 
> HDDS-284.02.patch, HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and 
> Error Detection for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  Right now a Chunk Info structure looks like this:
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>  
> Proposal is to change ChunkInfo structure as below: 
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   repeated KeyValue metadata = 4;
>   required ChecksumData checksumData = 5;
> }
>  
> The ChecksumData structure would be as follows: 
> message ChecksumData {
>   required ChecksumType type = 1;
>   required uint32 bytesPerChecksum = 2;
>   repeated bytes checksums = 3;
> }
>  
> Instead of changing disk format, we put the checksum into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-284) CRC for ChunksData

2018-11-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-284:

Attachment: HDDS-284.006.patch

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.006.patch, HDDS-284.01.patch, 
> HDDS-284.02.patch, HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and 
> Error Detection for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  Right now a Chunk Info structure looks like this:
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>  
> Proposal is to change ChunkInfo structure as below: 
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   repeated KeyValue metadata = 4;
>   required ChecksumData checksumData = 5;
> }
>  
> The ChecksumData structure would be as follows: 
> message ChecksumData {
>   required ChecksumType type = 1;
>   required uint32 bytesPerChecksum = 2;
>   repeated bytes checksums = 3;
> }
>  
> Instead of changing disk format, we put the checksum into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693698#comment-16693698
 ] 

Xiaoyu Yao commented on HDDS-9:
---

Thanks [~ajayydv] for the review. I've addressed two of your three comments in 
patch v4. 
{quote} * Shall we make X509Certificate a class field instead of initializing 
it in {{verify}}?{quote}
The certificate is queried from the certificate client inside the verifier, 
with the OmCertSerialId decoded from the token string. 
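
A rough sketch of that flow; the helper names are hypothetical and only mirror 
the description above, not the exact Ozone APIs:
{code:java}
// Sketch only: the verifier decodes the OM certificate serial id from the
// token on every verify call and asks the certificate client for the
// matching certificate, instead of caching an X509Certificate field.
OzoneBlockTokenIdentifier tokenId = decodeToken(tokenString);      // hypothetical
String omCertSerialId = tokenId.getOmCertSerialId();
X509Certificate cert = certificateClient.getCertificate(omCertSerialId);
verifySignature(tokenId, cert);                                    // hypothetical
{code}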

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-9) Add GRPC protocol interceptors for Ozone Block Token

2018-11-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-9?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-9:
--
Attachment: HDDS-9-HDDS-4.004.patch

> Add GRPC protocol interceptors for Ozone Block Token
> 
>
> Key: HDDS-9
> URL: https://issues.apache.org/jira/browse/HDDS-9
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-9-HDDS-4.001.patch, HDDS-9-HDDS-4.002.patch, 
> HDDS-9-HDDS-4.003.patch, HDDS-9-HDDS-4.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693693#comment-16693693
 ] 

Hadoop QA commented on HDFS-14088:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948842/HDFS-14088.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8dcc140c29d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c747830 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25570/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25570/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25570/console |
| Powered by | Apache 

[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693662#comment-16693662
 ] 

Ayush Saxena commented on HDFS-14075:
-

{quote}What's the behavior if we keep:
{quote}
[~elgoiri] In practice the result in this scenario stays the same: the cluster 
terminates and does not start up. If we keep the existing behavior, i.e. do not 
add checkExitOnShutdown(false), MiniDFSCluster will throw an AssertionError 
because the cluster terminated before shutdown was explicitly called; the same 
exception, just at a later stage.

Practically the result is the same, but MiniDFSCluster has a check to make sure 
the test did not terminate before shutdown was explicitly called. Since we 
expect the cluster to terminate here, checkExitOnShutdown(false) is the 
property added to prevent that AssertionError and let the test run normally.
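
For reference, a minimal sketch of how that property is set in a test; the rest 
of the builder configuration is illustrative:
{code:java}
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(0)
    .checkExitOnShutdown(false)  // the test expects the NN to terminate itself
    .build();
{code}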

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-20 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693656#comment-16693656
 ] 

Ayush Saxena commented on HDFS-14064:
-

Thanks [~brahmareddy] for the review. :)

Have uploaded v5 with said changes. 

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-20 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14064:

Attachment: HDFS-14064-05.patch

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch, 
> HDFS-14064-05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-804) Block token: Add secret token manager

2018-11-20 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-804:

Attachment: HDDS-804-HDDS-4.01.patch

> Block token: Add secret token manager
> -
>
> Key: HDDS-804
> URL: https://issues.apache.org/jira/browse/HDDS-804
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-804-HDDS-4.00.patch, HDDS-804-HDDS-4.01.patch
>
>
> Add secret manager to process block tokens in OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693650#comment-16693650
 ] 

Hadoop QA commented on HDDS-795:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-795 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-795 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948902/HDDS-795.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1768/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch, HDDS-795.006.patch
>
>
> The org.apache.hadoop.utils.db DBStore and Table interfaces provide a 
> vendor-independent way to access any key-value store.
> The default implementation uses RocksDB, but other implementations could 
> also be used (for example, an in-memory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB-specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB-specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14082) RBF: Add option to fail operations when a subcluster is unavailable

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693624#comment-16693624
 ] 

Íñigo Goiri commented on HDFS-14082:


Thanks [~linyiqun] for the comment. I changed it a little, but I would like to 
keep the "strictly smaller" check, as it shows the expected semantics better 
than {{assertNotEquals()}}.
Take a look at [^HDFS-14082-HDFS-13891.003.patch].
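
As a purely hypothetical illustration of the point (values and names invented 
here, not from the patch):
{code:java}
// Hypothetical numbers for illustration only.
int expectedTotal = 10;   // entries when every subcluster is reachable
int listed = 7;           // entries when one subcluster is down

// The explicit comparison documents the expected direction of the change;
// assertNotEquals() would also pass if listing somehow returned *more*.
assertTrue(listed < expectedTotal);
assertNotEquals(expectedTotal, listed);
{code}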

> RBF: Add option to fail operations when a subcluster is unavailable
> ---
>
> Key: HDFS-14082
> URL: https://issues.apache.org/jira/browse/HDFS-14082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14082-HDFS-13891.002.patch, 
> HDFS-14082-HDFS-13891.003.patch, HDFS-14082.000.patch, HDFS-14082.001.patch
>
>
> When a subcluster is unavailable, operations like {{getListing()}} currently 
> succeed. We should add an option to fail the operation if one of the 
> subclusters is unavailable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-20 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693637#comment-16693637
 ] 

Elek, Marton commented on HDDS-795:
---

Oops, thanks. My comment got reverted during my fight with git rebase.

Now I have fixed it, together with the one remaining checkstyle issue.

{code}
   * Initialize an atomic batch operation which can hold multiple PUT/DELETE
   * operations to be committed later in one step.
   *
   * @return BatchOperation holder which can be used to add or commit batch
   * operations.
   */
  BatchOperation initBatchOperation();
{code}
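
For context, a hypothetical usage sketch of that API; only 
{{initBatchOperation()}} is quoted above, so the per-table batch methods, the 
commit call, and BatchOperation being AutoCloseable are assumptions here:
{code:java}
// Assumed sketch: stage several mutations and commit them in one step.
try (BatchOperation batch = store.initBatchOperation()) {
  table.putWithBatch(batch, key1Bytes, value1Bytes);
  table.deleteWithBatch(batch, key2Bytes);
  store.commitBatchOperation(batch);  // all PUTs/DELETEs apply together
}
{code}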

> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch, HDDS-795.006.patch
>
>
> The org.apache.hadoop.utils.db DBStore and Table interfaces provide a 
> vendor-independent way to access any key-value store.
> The default implementation uses RocksDB, but other implementations could 
> also be used (for example, an in-memory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB-specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB-specific classes from the generic interfaces.
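
For illustration, one vendor-neutral shape such an accessor could take instead 
(a sketch under assumptions, not necessarily the actual patch):
{code:java}
import java.io.IOException;

// Sketch: callers ask the store for a named Table, so no RocksDB types
// (such as ColumnFamilyHandle) appear in the generic interface.
public interface DBStore extends AutoCloseable {
  Table getTable(String name) throws IOException;
}
{code}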



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693630#comment-16693630
 ] 

Íñigo Goiri commented on HDFS-14075:


Should we keep testing the old behavior in TestNNWithQJM?
What's the behavior if we keep:
{code}
cluster = new MiniDFSCluster.Builder(conf)  
.numDataNodes(0)
.manageNameDfsDirs(false)   
.format(false)  
.build();
{code}
?

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch, HDFS-14075-05.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-20 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-795:
--
Attachment: HDDS-795.006.patch

> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch, HDDS-795.006.patch
>
>
> The org.apache.hadoop.utils.db DBStore and Table interfaces provide a 
> vendor-independent way to access any key-value store.
> The default implementation uses RocksDB, but other implementations could 
> also be used (for example, an in-memory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB-specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks a RocksDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB-specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693627#comment-16693627
 ] 

Íñigo Goiri edited comment on HDFS-14089 at 11/20/18 6:28 PM:
--

We have a MiniKDC setup in HDFS-12284.
Can we use that setup to trigger that exception?


was (Author: elgoiri):
We have a MiniKDC setup in HDFS-12284.
Can we use this to trigger this exception.

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same 
> configuration for NamenodeHeartbeatService as well.
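
For illustration, a sketch of the kind of change implied (mirroring what 
DFSHAAdmin does; treat the exact keys and call site as assumptions):
{code:java}
// Sketch only: set the server principal key on the conf used by
// NamenodeHeartbeatService before the NN protocol proxies are created.
conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
    conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
{code}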



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-20 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14088:
---
Status: Patch Available  (was: Open)

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
> Attachments: HDFS-14088.001.patch
>
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread runs the try block while another thread triggers a failover by 
> calling
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> This sets currentUsedProxy to null, so the first thread can throw a 
> NullPointerException.
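
One possible remedy, sketched here as an assumption about the fix direction 
(not necessarily what the attached patch does), is to read the field once into 
a local so a concurrent failover cannot null it between the check and the use:
{code:java}
// Sketch: snapshot the field once; performFailover() may still null the
// field concurrently, but this invocation keeps a stable reference.
final ProxyInfo<T> current = currentUsedProxy;
if (current != null) {
  Object retVal = method.invoke(current.proxy, args);
  LOG.debug("Invocation successful on [{}]", current.proxyInfo);
  return retVal;
}
{code}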



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693627#comment-16693627
 ] 

Íñigo Goiri commented on HDFS-14089:


We have a MiniKDC setup in HDFS-12284.
Can we use this to trigger this exception?

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same 
> configuration for NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-20 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16693626#comment-16693626
 ] 

Íñigo Goiri commented on HDFS-14085:


OK, I'm not sure how to handle mount points in multiple subclusters though.

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> The LS command for / lists all the mount entries, but the permission displayed 
> is the default permission (777), and the owner and group info shown is that of 
> the user calling it, whereas it actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


