[jira] [Work logged] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?focusedWorklogId=317184&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317184
 ]

ASF GitHub Bot logged work on HDDS-2168:


Author: ASF GitHub Bot
Created on: 24/Sep/19 05:41
Start Date: 24/Sep/19 05:41
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1509: HDDS-2168. 
TestOzoneManagerDoubleBufferWithOMResponse sometimes fails…
URL: https://github.com/apache/hadoop/pull/1509#issuecomment-534398049
 
 
   LGTM +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317184)
Time Spent: 50m  (was: 40m)

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times on dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-23 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936410#comment-16936410
 ] 

Jinglun commented on HDFS-14814:


Thanks [~ayushtkn] and [~elgoiri] for your nice comments! I followed your suggestions 
in v05. Pending Jenkins.

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of a remote location is set to the quota of its corresponding 
> MountTable entry; if the MountTable entry has no quota, the quota is inherited 
> from the nearest parent MountTable entry that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable entry, then 
> check and fix every MountTable entry whose quota doesn't match the rule above.
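
As an illustration of the nearest-parent lookup described above, here is a minimal sketch; the MountEntry type, its fields, and the resolve() helper are assumptions for this example, not the actual RouterQuotaUpdateService code.

{code:java}
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

class MountEntry {
  final String src;    // mount point, e.g. "/dir-a/dir-b"
  final long nquota;   // -1 means "not set"
  final long squota;   // -1 means "not set"
  MountEntry(String src, long nquota, long squota) {
    this.src = src; this.nquota = nquota; this.squota = squota;
  }
}

class InheritedQuotaResolver {
  /** Walk up the mount table until both quotas are resolved or the root is reached. */
  static long[] resolve(String src, Map<String, MountEntry> table) {
    long nquota = -1, squota = -1;
    String path = src;
    while (path != null) {
      MountEntry e = table.get(path);
      if (e != null) {
        if (nquota == -1) { nquota = e.nquota; }
        if (squota == -1) { squota = e.squota; }
      }
      if (nquota != -1 && squota != -1) { break; }
      int idx = path.lastIndexOf('/');
      path = idx > 0 ? path.substring(0, idx) : null;  // stop above the root
    }
    return new long[] {nquota, squota};
  }

  public static void main(String[] args) {
    Map<String, MountEntry> table = new TreeMap<>();
    table.put("/dir-a", new MountEntry("/dir-a", 10, 20));
    table.put("/dir-a/dir-b", new MountEntry("/dir-a/dir-b", -1, 30));
    table.put("/dir-a/dir-b/dir-c", new MountEntry("/dir-a/dir-b/dir-c", -1, -1));
    table.put("/dir-d", new MountEntry("/dir-d", -1, -1));
    // Prints [10, 30], matching ns2->/dir-c in the table above.
    System.out.println(Arrays.toString(resolve("/dir-a/dir-b/dir-c", table)));
    // Prints [-1, -1], matching ns3->/dir-d (no parent with a quota set).
    System.out.println(Arrays.toString(resolve("/dir-d", table)));
  }
}
{code}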



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936409#comment-16936409
 ] 

Hadoop QA commented on HDDS-1868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m 
46s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 24s{color} | 
{color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 18s{color} | 
{color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 

[jira] [Updated] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-23 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14814:
---
Attachment: HDFS-14814.005.patch

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of a remote location is set to the quota of its corresponding 
> MountTable entry; if the MountTable entry has no quota, the quota is inherited 
> from the nearest parent MountTable entry that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable entry, then 
> check and fix every MountTable entry whose quota doesn't match the rule above.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-23 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Attachment: HDFS-14850.005.patch

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch, 
> HDFS-14850.005.patch
>
>
> {code:java}
>   @Override
>   public Configuration getFileSystemConfiguration() {
>     Configuration conf = new Configuration(true);
>     ConfigurationUtils.copy(serviceHadoopConf, conf);
>     conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
>     // Force-clear server-side umask to make HttpFS match WebHDFS behavior
>     conf.set(FsPermission.UMASK_LABEL, "000");
>     return conf;
>   }
> {code}
> As the code above shows, the current code creates a new Configuration every time 
> FileSystemAccessService#getFileSystemConfiguration is called.
> This is unnecessary and affects performance. I think we only need to create the 
> Configuration once in FileSystemAccessService#init and have 
> FileSystemAccessService#getFileSystemConfiguration return it.
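
To make the proposal concrete, a minimal sketch of the change could look like the following; the fileSystemConf field name and the init() signature shown here are assumptions for illustration, not the actual HttpFS code.

{code:java}
// Build the Configuration once during service startup instead of on every call.
private Configuration fileSystemConf;

@Override
protected void init() throws ServiceException {
  // ... existing init logic ...
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.fileSystemConf = conf;
}

@Override
public Configuration getFileSystemConfiguration() {
  // Return the cached instance; no new Configuration per call.
  return fileSystemConf;
}
{code}

One thing to watch with this approach is that callers would then share a single Configuration instance, so they must not mutate it.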



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317180&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317180
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 24/Sep/19 05:24
Start Date: 24/Sep/19 05:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1510: 
HDDS-2170. Add Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510#discussion_r327431601
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
 ##
 @@ -188,6 +237,29 @@ public int hashCode() {
 private long quotaInBytes;
 private Map<String, String> metadata;
 private OmOzoneAclMap aclMap;
+private long objectID;
+private long updateID;
+
+/**
+ * Sets the Object ID for this Object.
+ * Object IDs are unique and immutable identifiers for each object in the
+ * System.
+ * @param objectID - long
+ */
+public void setObjectID(long objectID) {
+  this.objectID = objectID;
 
 Review comment:
   Just a question: why do we need to have both objectID and updateID?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317180)
Time Spent: 1h  (was: 50m)

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936400#comment-16936400
 ] 

Hadoop QA commented on HDFS-14859:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 531 unchanged - 
0 fixed = 532 total (was 531) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 15 new + 3 unchanged - 0 fixed = 18 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981118/HDFS-14859.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 02c4eb79c4d4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8e7d7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| javac | 

[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.04.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: (was: HDDS-1868.04.patch)

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936374#comment-16936374
 ] 

Siddharth Wagle edited comment on HDDS-1868 at 9/24/19 4:47 AM:


[~ljain] Can you please review the new changes in the 04 patch and the RATIS one? 
I will write a unit test thereafter.


was (Author: swagle):
[~ljain] Can you please review new changes in this patch and the RATIS one? I 
will write a unit test thereafter.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.04.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: (was: HDDS-1868.04.patch)

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936381#comment-16936381
 ] 

Akira Ajisaka edited comment on HDFS-14845 at 9/24/19 4:29 AM:
---

Thanks [~Prabhu Joseph] for the patch and thanks [~eyang] for the review. 
Tested the 004 patch in a dev cluster and it worked as expected.

Would you add the deprecated properties to DeprecatedProperties.md?


was (Author: ajisakaa):
Thanks [~Prabhu Joseph] for the patch and thanks [~eyang] for the review. 
Tested the 004 patch and worked as expected.

Would you add the deprecated properties in DeprecatedProperties.md?

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, 
> HDFS-14845-003.patch, HDFS-14845-004.patch
>
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936381#comment-16936381
 ] 

Akira Ajisaka commented on HDFS-14845:
--

Thanks [~Prabhu Joseph] for the patch and thanks [~eyang] for the review. 
Tested the 004 patch and it worked as expected.

Would you add the deprecated properties to DeprecatedProperties.md?

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, 
> HDFS-14845-003.patch, HDFS-14845-004.patch
>
>
> We are facing a "Request is a replay (34)" error when accessing HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14832) RBF : Add Icon for ReadOnly False

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936378#comment-16936378
 ] 

Hadoop QA commented on HDFS-14832:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-14832 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14832 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27948/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF : Add Icon for ReadOnly False
> -
>
> Key: HDFS-14832
> URL: https://issues.apache.org/jira/browse/HDFS-14832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14832.001.patch, HDFS-14832.002.patch, 
> HDFS-14832.003.patch, HDFS-14832.after.JPG, ReadWrite.After.JPG, 
> ReadWrite.before.JPG, Screenshot from 2019-09-18 23-55-17.png
>
>
> In the Router Web UI's Mount Table information, add an icon for the read-only 
> state false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936379#comment-16936379
 ] 

Hadoop QA commented on HDDS-1868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 13s{color} 
| {color:red} HDDS-1868 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-1868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981121/HDDS-1868.04.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2789/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14832) RBF : Add Icon for ReadOnly False

2019-09-23 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936377#comment-16936377
 ] 

hemanthboyina commented on HDFS-14832:
--

Attached a screenshot after the fix, [~elgoiri].

> RBF : Add Icon for ReadOnly False
> -
>
> Key: HDFS-14832
> URL: https://issues.apache.org/jira/browse/HDFS-14832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14832.001.patch, HDFS-14832.002.patch, 
> HDFS-14832.003.patch, HDFS-14832.after.JPG, ReadWrite.After.JPG, 
> ReadWrite.before.JPG, Screenshot from 2019-09-18 23-55-17.png
>
>
> In the Router Web UI's Mount Table information, add an icon for the read-only 
> state false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14832) RBF : Add Icon for ReadOnly False

2019-09-23 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14832:
-
Attachment: HDFS-14832.after.JPG

> RBF : Add Icon for ReadOnly False
> -
>
> Key: HDFS-14832
> URL: https://issues.apache.org/jira/browse/HDFS-14832
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: HDFS-14832.001.patch, HDFS-14832.002.patch, 
> HDFS-14832.003.patch, HDFS-14832.after.JPG, ReadWrite.After.JPG, 
> ReadWrite.before.JPG, Screenshot from 2019-09-18 23-55-17.png
>
>
> In the Router Web UI's Mount Table information, add an icon for the read-only 
> state false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936374#comment-16936374
 ] 

Siddharth Wagle commented on HDDS-1868:
---

[~ljain] Can you please review the new changes in this patch and the RATIS one? I 
will write a unit test thereafter.

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-09-23 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1868:
--
Attachment: HDDS-1868.04.patch

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch
>
>
> Ozone pipelines start in the allocated state on restart; they are moved into the 
> open state after all the datanodes in the pipeline have reported it. However, this 
> can potentially lead to an issue where the pipeline is still not ready to accept 
> any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317143
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 24/Sep/19 03:43
Start Date: 24/Sep/19 03:43
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1510: HDDS-2170. 
Add Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510#discussion_r327415701
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
 ##
 @@ -116,6 +116,11 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 Collection<String> ozAdmins = ozoneManager.getOzoneAdmins();
 try {
   omVolumeArgs = OmVolumeArgs.getFromProtobuf(volumeInfo);
+  // when you create a volume, we set both Object ID and update ID to the
+  // same ratis transaction ID. The Object ID will never change, but update
+  // ID will be set to transactionID each time we update the object.
 
 Review comment:
   yes, we need to. I wanted to make smaller patches. I will file and post the 
next patch as soon as this is committed. 
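
For illustration, the create/update semantics described in the comment amount to something like the sketch below; the class and method names are assumptions for this example, not the actual OmVolumeArgs code.

{code:java}
// Sketch of the objectID/updateID lifecycle: objectID is fixed at creation time,
// updateID follows the transaction ID of the latest mutation.
class VersionedOmObject {
  private final long objectID;  // assigned once, from the creating transaction
  private long updateID;        // bumped on every update

  VersionedOmObject(long creationTxnID) {
    this.objectID = creationTxnID;  // on create: objectID == updateID == txn ID
    this.updateID = creationTxnID;
  }

  void onUpdate(long txnID) {
    // The object ID never changes; only the update ID moves forward.
    this.updateID = txnID;
  }

  long getObjectID() { return objectID; }
  long getUpdateID() { return updateID; }
}
{code}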
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317143)
Time Spent: 50m  (was: 40m)

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=317140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317140
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 24/Sep/19 03:29
Start Date: 24/Sep/19 03:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1469: HDDS-2034. Async 
RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-534372692
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 86 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | -1 | mvninstall | 29 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | mvninstall | 27 | hadoop-ozone in HDDS-1564 failed. |
   | -1 | compile | 19 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | compile | 13 | hadoop-ozone in HDDS-1564 failed. |
   | +1 | checkstyle | 61 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 941 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | javadoc | 16 | hadoop-ozone in HDDS-1564 failed. |
   | 0 | spotbugs | 1025 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 28 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | findbugs | 17 | hadoop-ozone in HDDS-1564 failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 25 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | cc | 21 | hadoop-hdds in the patch failed. |
   | -1 | cc | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 25 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) |
   | +1 | checkstyle | 27 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 780 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2529 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1469 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux ca24f266e7b3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 7b5a5fe |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1469/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317139
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 24/Sep/19 03:16
Start Date: 24/Sep/19 03:16
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1510: HDDS-2170. 
Add Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510#discussion_r327411316
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
 ##
 @@ -116,6 +116,11 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 Collection<String> ozAdmins = ozoneManager.getOzoneAdmins();
 try {
   omVolumeArgs = OmVolumeArgs.getFromProtobuf(volumeInfo);
+  // when you create a volume, we set both Object ID and update ID to the
+  // same ratis transaction ID. The Object ID will never change, but update
+  // ID will be set to transactionID each time we update the object.
 
 Review comment:
   Do we need to add handling for updateID changes in some volume metadata 
operations such as set/add/remove ACL, addMetadata, etc.?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317139)
Time Spent: 40m  (was: 0.5h)

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936336#comment-16936336
 ] 

Hadoop QA commented on HDDS-2169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m 
33s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| 

[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936328#comment-16936328
 ] 

Dinesh Chitlangia commented on HDFS-14859:
--

[~smajeti] The tests will be triggered automatically in time and will always 
pick the latest version of the patch.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}
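
A note on the proposed snippet above: in Java the ternary operator binds more 
loosely than &&, so the expression as written is parsed as 
"(blockSafe >= blockThreshold && datanodeThreshold > 0) ? ... : true" and 
returns true whenever blockSafe < blockThreshold, which is the opposite of the 
intent. A minimal sketch with the short-circuit made explicit is shown below; 
it only reuses the field and method names from the snippet and is illustrative, 
not one of the attached patches.

{code:java}
// Illustrative sketch only (not an attached HDFS-14859 patch): same idea as the
// proposal above, with the cheap block check done first so the live-datanode
// count is only computed when a datanode threshold is actually configured.
private boolean areThresholdsMet() {
  assert namesystem.hasWriteLock();
  synchronized (this) {
    if (blockSafe < blockThreshold) {
      return false;                      // cheap check first, no datanode scan
    }
    return datanodeThreshold <= 0
        || blockManager.getDatanodeManager().getNumLiveDataNodes()
            >= datanodeThreshold;
  }
}
{code}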



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936327#comment-16936327
 ] 

Hadoop QA commented on HDFS-14850:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  8s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerWebServerWithRandomSecret |
|   | hadoop.fs.http.server.TestHttpFSServerNoACLs |
|   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServer |
|   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerNoXAttrs |
|   | hadoop.lib.service.hadoop.TestFileSystemAccessService |
|   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServerWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1298/HDFS-14850.004%282%29.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d27f0aa0f4f6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936326#comment-16936326
 ] 

Hadoop QA commented on HDFS-14859:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 13 new + 3 unchanged - 0 fixed = 16 total (was 3) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
46s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981113/HDFS-14859.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cc62d966ac7f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e8e7d7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27946/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27946/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27946/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936324#comment-16936324
 ] 

Srinivasu Majeti commented on HDFS-14859:
-

I have corrected it now after making sure the compilation went fine, 
[~dineshchitlangia]. Do you have access to re-run the tests?

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasu Majeti updated HDFS-14859:

Attachment: HDFS-14859.006.patch

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936317#comment-16936317
 ] 

Dinesh Chitlangia commented on HDFS-14859:
--

[~smajeti] On checking the details of the patch-compile failure, it looks like 
there are compilation issues:

{{[ERROR] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java:[393,4]
 error: cannot find symbol}}

 

Are you able to build cleanly in your local setup?

 

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Tsz Wo Nicholas Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2169 started by Tsz Wo Nicholas Sze.
-
> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o2169_20190923.patch
>
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resulting protobuf is then converted 
> to a ByteString, which internally copies the buffer embedded inside the 
> protobuf again so that it can be submitted to the Ratis client. Likewise, 
> while building up the appendRequestProto for the appendRequest, the data 
> might be copied yet again. The idea here is to let the client pass the raw 
> data (stateMachine data) separately to the Ratis client without the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}
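
Two small no-copy techniques pointing in the direction described above are 
sketched below. This is an illustration only, not the attached 
o2169_20190923.patch (which may take a different approach); the class name and 
the helper are hypothetical, and the shaded protobuf package actually used by 
Ratis/Ozone may differ from the plain com.google.protobuf one shown here.

{code:java}
// Illustrative sketch only. Idea 1: hand Ratis a Message whose content is
// serialized lazily and at most once. Idea 2: wrap caller-owned buffers with
// UnsafeByteOperations instead of copying them into a new ByteString.
import java.util.function.Supplier;
import com.google.protobuf.ByteString;           // shaded package may differ in Ratis/Ozone
import com.google.protobuf.UnsafeByteOperations;
import org.apache.ratis.protocol.Message;

final class LazyRequestMessage implements Message {
  private final Supplier<ByteString> encoder;
  private volatile ByteString content;           // encoded at most once, on demand

  LazyRequestMessage(Supplier<ByteString> encoder) {
    this.encoder = encoder;
  }

  @Override
  public ByteString getContent() {
    ByteString c = content;
    if (c == null) {
      c = encoder.get();                         // serialize only when Ratis asks for it
      content = c;                               // benign race: encoding is idempotent
    }
    return c;
  }

  // Wrap a caller-owned buffer without copying it into a new ByteString.
  // The caller must not mutate the array afterwards (hence "unsafe").
  static ByteString wrapWithoutCopy(byte[] chunkData) {
    return UnsafeByteOperations.unsafeWrap(chunkData);
  }
}
{code}

In sendRequestAsync, the lambda () -> byteString could then be replaced with 
new LazyRequestMessage(finalPayload::toByteString), so the serialization 
happens at most once and only when Ratis actually needs the bytes.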



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Tsz Wo Nicholas Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-2169:
--
Status: Patch Available  (was: Open)

o2169_20190923.patch: 1st patch.

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o2169_20190923.patch
>
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resulting protobuf is then converted 
> to a ByteString, which internally copies the buffer embedded inside the 
> protobuf again so that it can be submitted to the Ratis client. Likewise, 
> while building up the appendRequestProto for the appendRequest, the data 
> might be copied yet again. The idea here is to let the client pass the raw 
> data (stateMachine data) separately to the Ratis client without the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Tsz Wo Nicholas Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2169 stopped by Tsz Wo Nicholas Sze.
-
> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o2169_20190923.patch
>
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resulting protobuf is then converted 
> to a ByteString, which internally copies the buffer embedded inside the 
> protobuf again so that it can be submitted to the Ratis client. Likewise, 
> while building up the appendRequestProto for the appendRequest, the data 
> might be copied yet again. The idea here is to let the client pass the raw 
> data (stateMachine data) separately to the Ratis client without the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Tsz Wo Nicholas Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-2169:
--
Attachment: o2169_20190923.patch

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o2169_20190923.patch
>
>
> Currently, while sending write requests to Ratis from Ozone, the data is 
> encoded into a protobuf object, and the resulting protobuf is then converted 
> to a ByteString, which internally copies the buffer embedded inside the 
> protobuf again so that it can be submitted to the Ratis client. Likewise, 
> while building up the appendRequestProto for the appendRequest, the data 
> might be copied yet again. The idea here is to let the client pass the raw 
> data (stateMachine data) separately to the Ratis client without the copying 
> overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936307#comment-16936307
 ] 

Srinivasu Majeti commented on HDFS-14859:
-

Thanks [~dineshchitlangia], I have uploaded the modified version 5 of the patch. 
Is there something I need to do about the errors seen in the Patch Compile Tests?

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasu Majeti updated HDFS-14859:

Attachment: HDFS-14859.005.patch

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasu Majeti updated HDFS-14859:

Attachment: HDFS-14859.004.patch

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936122#comment-16936122
 ] 

Dinesh Chitlangia edited comment on HDFS-14859 at 9/24/19 2:00 AM:
---

[~smajeti] Thanks for the quick turnaround. I noticed the old variable names in 
the comments in BlockManagerSafeMode.java -L573- & 576 in your patch.

Either you can fix this and any other such occurrences in this file in the 
patch itself, or whoever is going to commit this can fix it during the commit. I 
leave it up to you/committer :)

I usually like to fix it in the patch; it makes it easy for the committer.


was (Author: dineshchitlangia):
[~smajeti] Thanks for the quick turnaround. I noticed the old variable names in 
the comments in BlockManagerSafeMode.java L573 & 576 in your patch.

Either you can fix this and any other such occurrences in this file in the 
patch itself, or whoever is going to commit this can fix it during the commit. I 
leave it up to you/committer :)

I usually like to fix it in the patch; it makes it easy for the committer.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936122#comment-16936122
 ] 

Dinesh Chitlangia edited comment on HDFS-14859 at 9/24/19 2:00 AM:
---

[~smajeti] Thanks for the quick turnaround. I noticed the old variable names in 
the comments in BlockManagerSafeMode.java -L573 &- 576 in your patch.

Either you can fix this and any other such occurrences in this file in the 
patch itself, or whoever is going to commit this can fix it during the commit. I 
leave it up to you/committer :)

I usually like to fix it in the patch; it makes it easy for the committer.


was (Author: dineshchitlangia):
[~smajeti] Thanks for the quick turnaround. I noticed the old variable names in 
the comments in BlockManagerSafeMode.java -L573- & 576 in your patch.

Either you can fix this and any other such occurrences in this file in the 
patch itself, or whoever is going to commit this can fix it during the commit. I 
leave it up to you/committer :)

I usually like to fix it in the patch; it makes it easy for the committer.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 addressing the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. the dfs.namenode.safemode.min.datanodes 
> parameter being set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-23 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14850:
---
Attachment: HDFS-14850.004(2).patch

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration currently creates a new 
> Configuration. 
> This is not necessary and affects performance. I think it only needs to create 
> the Configuration once in FileSystemAccessService#init, and 
> FileSystemAccessService#getFileSystemConfiguration can then return it.
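
A minimal sketch of that caching idea is below, assuming the prepared 
Configuration can be kept in a field populated from 
FileSystemAccessService#init. It is illustrative only, not one of the attached 
patches, and it makes the trade-off explicit: returning the shared instance is 
only safe if callers never mutate it; otherwise a defensive copy is still 
needed.

{code:java}
// Illustrative sketch of the caching approach described above (not an attached
// patch). Constant names follow the snippet; the field and the copy-vs-share
// decision are assumptions for illustration.
private Configuration fileSystemConf;

@Override
protected void init() throws ServiceException {
  // ... existing init logic ...
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.fileSystemConf = conf;              // built once when the service starts
}

@Override
public Configuration getFileSystemConfiguration() {
  // Safe only if callers treat the returned conf as read-only; otherwise return
  // new Configuration(fileSystemConf) to keep callers isolated from each other.
  return fileSystemConf;
}
{code}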



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=317107=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317107
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 24/Sep/19 01:31
Start Date: 24/Sep/19 01:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1511: HDDS-2162. Make 
Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-534348793
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 968 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 734 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2420 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1511 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0f79de22ddec 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e8e7d7b |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Updated] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Description: 
To have a single configuration to use across the OM cluster, a few of the 
configs like 

OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,

OZONE_OM_KERBEROS_PRINCIPAL_KEY,

OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,

OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed with 
the service id and node id.

 

Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.

 

This Jira is to fix the above configs.

 

  was:
To have a single configuration to use across the OM cluster, a few of the 
configs like 

OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,

OZONE_OM_KERBEROS_PRINCIPAL_KEY,

OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,

OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed with 
the service id and node id.

 

This Jira is to fix the above configs.

 


> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  
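
For context, HA-style keys here follow the same pattern HDFS HA uses for 
per-namenode settings: a base key optionally suffixed with the service id and 
node id, with a fallback to the plain key. A small lookup helper along those 
lines is sketched below; it is illustrative only, not the code from the pull 
request, and the key in the example is hypothetical rather than a literal 
constant.

{code:java}
// Illustrative sketch of HA-style key resolution (not the PR-1511 code).
import org.apache.hadoop.conf.Configuration;

final class HaConfKeys {
  // e.g. key = "ozone.om.kerberos.keytab.file", serviceId = "omservice", nodeId = "om1"
  static String resolve(Configuration conf, String key,
                        String serviceId, String nodeId) {
    // Most specific first: key.serviceId.nodeId, then key.serviceId, then the plain key.
    String value = conf.get(key + "." + serviceId + "." + nodeId);
    if (value != null) {
      return value;
    }
    value = conf.get(key + "." + serviceId);
    return value != null ? value : conf.get(key);
  }
}
{code}

With such a helper, a node-suffixed key would win over the plain key whenever 
both are present, so one config file can serve every OM node in the cluster.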



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=317100=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317100
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 24/Sep/19 00:49
Start Date: 24/Sep/19 00:49
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511
 
 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317100)
Remaining Estimate: 0h
Time Spent: 10m

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2162:
-
Labels: pull-request-available  (was: )

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support config keys suffixed 
> with the service id and node id.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936273#comment-16936273
 ] 

Hudson commented on HDDS-2159:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17362 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17362/])
HDDS-2159. Fix Race condition in ProfileServlet#pid. (aengineer: rev 
0a716bd3a5b38779bb07450acb3279e859bb7471)
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java


> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The servlet member field pid 
> should not be used for local assignment; it could lead to a race condition.
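As a hedged sketch of the pattern being fixed (simplified names, not the actual 
ProfileServlet code), the problem with stashing a per-request value in a shared member 
field looks like this:

{code:java}
// Simplified illustration only; not the real ProfileServlet implementation.
public class PidFieldSketch {
  private Integer pid;  // shared by all concurrent requests -> racy

  // Racy: between the assignment and the use, another request may overwrite pid.
  public String handleRacy(Integer requestedPid) {
    pid = (requestedPid != null) ? requestedPid : defaultPid();
    return "profiling pid " + pid;  // may observe another request's value
  }

  // Safe: keep the per-request value in a local variable instead of the field.
  public String handleSafe(Integer requestedPid) {
    final Integer localPid = (requestedPid != null) ? requestedPid : defaultPid();
    return "profiling pid " + localPid;
  }

  private Integer defaultPid() {
    return 1;
  }
}
{code}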



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?focusedWorklogId=317085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317085
 ]

ASF GitHub Bot logged work on HDDS-2159:


Author: ASF GitHub Bot
Created on: 24/Sep/19 00:32
Start Date: 24/Sep/19 00:32
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1488: HDDS-2159. 
Fix Race condition in ProfileServlet#pid.
URL: https://github.com/apache/hadoop/pull/1488
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317085)
Time Spent: 1h 10m  (was: 1h)

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The servlet member field pid 
> should not be used for local assignment; it could lead to a race condition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?focusedWorklogId=317084=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317084
 ]

ASF GitHub Bot logged work on HDDS-2159:


Author: ASF GitHub Bot
Created on: 24/Sep/19 00:32
Start Date: 24/Sep/19 00:32
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1488: HDDS-2159. Fix 
Race condition in ProfileServlet#pid.
URL: https://github.com/apache/hadoop/pull/1488#issuecomment-534337304
 
 
   I ran checkstyle three times and confirmed that, at this point, the 
patch is clean of issues.
   @adoroszlai and @xiaoyuyao, thanks for the reviews. @hanishakoneru, thank 
you for the contribution. I have committed this patch to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317084)
Time Spent: 1h  (was: 50m)

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The servlet member field pid 
> should not be used for local assignment; it could lead to a race condition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14778) BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage state is failed

2019-09-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936272#comment-16936272
 ] 

Wei-Chiu Chuang commented on HDFS-14778:


I've definitely seen something similar happen, which was fixed by HDFS-9958, 
so I was curious why this is different from HDFS-9958.
Also, the test doesn't work: if I remove the fix, the test still passes.

> BlockManager findAndMarkBlockAsCorrupt adds block to the map if the Storage 
> state is failed
> ---
>
> Key: HDFS-14778
> URL: https://issues.apache.org/jira/browse/HDFS-14778
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14778.001.patch, HDFS-14778.002.patch, 
> HDFS-14778.003.patch
>
>
> Should not mark the block as corrupt if the storage state is failed



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2159.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The servlet member field pid 
> should not be used for local assignment; it could lead to a race condition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317056=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317056
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 24/Sep/19 00:09
Start Date: 24/Sep/19 00:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1510: HDDS-2170. Add 
Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510#issuecomment-534332701
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 81 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | -1 | mvninstall | 42 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
   | -1 | compile | 17 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 927 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1011 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 29 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 25 | hadoop-ozone in the patch failed. |
   | -1 | compile | 20 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | cc | 20 | hadoop-hdds in the patch failed. |
   | -1 | cc | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 20 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 52 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 783 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2532 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1510 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 88621073dc38 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cbe5d3 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1510/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 

[jira] [Commented] (HDFS-14868) RBF: Fix typo in TestRouterQuota

2019-09-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936260#comment-16936260
 ] 

Wei-Chiu Chuang commented on HDFS-14868:


+1

> RBF: Fix typo in TestRouterQuota
> 
>
> Key: HDFS-14868
> URL: https://issues.apache.org/jira/browse/HDFS-14868
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Trivial
> Attachments: HDFS-14868.001.patch
>
>
> There is a typo in TestRouterQuota; see the patch for details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936252#comment-16936252
 ] 

Íñigo Goiri commented on HDFS-14864:


The failed unit tests don't seem related, but do you mind checking?

> DatanodeDescriptor Use Concurrent BlockingQueue
> ---
>
> Key: HDFS-14864
> URL: https://issues.apache.org/jira/browse/HDFS-14864
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14864.1.patch, HDFS-14864.2.patch, 
> HDFS-14864.3.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L104-L106
> This collection needs to be thread safe, and the code needs to repeatedly poll the 
> queue to drain it, so use {{BlockingQueue}}, which has a {{drainTo()}} method 
> just for this purpose:
> {quote}
> This operation may be more efficient than repeatedly polling this queue.
> {quote}
> [https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html#drainTo(java.util.Collection,%20int)]
> Also, the current code returns 'null' if there is nothing to drain from the 
> queue.  This is a confusing and error-prone effect.  It should just return an 
> empty list.  I've also updated the code to be more consistent and to return a 
> Java {{List}} in all places instead of a {{List}} in some and a native array 
> in others.  This will make the entire usage much more consistent and safe.
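A minimal sketch of the drain pattern described above, assuming a simple String queue 
rather than the actual DatanodeDescriptor types:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DrainSketch {
  private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

  // Drain everything currently queued in one call instead of polling repeatedly,
  // and return an empty list (never null) when there is nothing to drain.
  public List<String> drainPending(int maxItems) {
    List<String> drained = new ArrayList<>();
    pending.drainTo(drained, maxItems);
    return drained.isEmpty() ? Collections.<String>emptyList() : drained;
  }

  public static void main(String[] args) {
    DrainSketch sketch = new DrainSketch();
    sketch.pending.add("block-1");
    sketch.pending.add("block-2");
    System.out.println(sketch.drainPending(10)); // [block-1, block-2]
    System.out.println(sketch.drainPending(10)); // []
  }
}
{code}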



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317036
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 23/Sep/19 23:27
Start Date: 23/Sep/19 23:27
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1510: HDDS-2170. Add 
Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510#issuecomment-534323764
 
 
   /label ozone
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317036)
Time Spent: 20m  (was: 10m)

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?focusedWorklogId=317035=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317035
 ]

ASF GitHub Bot logged work on HDDS-2170:


Author: ASF GitHub Bot
Created on: 23/Sep/19 23:26
Start Date: 23/Sep/19 23:26
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1510: HDDS-2170. 
Add Object IDs and Update ID to Volume Object
URL: https://github.com/apache/hadoop/pull/1510
 
 
   This patch proposes to add object ID and update ID when a volume is created.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317035)
Remaining Estimate: 0h
Time Spent: 10m

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2170:
-
Labels: pull-request-available  (was: )

> Add Object IDs and Update ID to Volume Object
> -
>
> Key: HDDS-2170
> URL: https://issues.apache.org/jira/browse/HDDS-2170
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>
> This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2170:
--

 Summary: Add Object IDs and Update ID to Volume Object
 Key: HDDS-2170
 URL: https://issues.apache.org/jira/browse/HDDS-2170
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14849:
---
Component/s: erasure-coding
 ec

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When a datanode stays in DECOMMISSION_INPROGRESS status, the EC internal 
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes 
> simultaneously. 
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2019-09-23 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13732:
---
Component/s: ec

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec, erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS 
> directory, the console message still reads "Set default erasure coding 
> policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set...
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?focusedWorklogId=317030=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317030
 ]

ASF GitHub Bot logged work on HDDS-2168:


Author: ASF GitHub Bot
Created on: 23/Sep/19 23:13
Start Date: 23/Sep/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1509: HDDS-2168. 
TestOzoneManagerDoubleBufferWithOMResponse sometimes fails…
URL: https://github.com/apache/hadoop/pull/1509#issuecomment-534320473
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 176 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 73 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1207 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1318 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 40 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 25 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 25 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 799 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2967 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9c8ef3ecc37d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cbe5d3 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1509/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936245#comment-16936245
 ] 

Wei-Chiu Chuang commented on HDFS-14858:


{code}
public static final boolean DFS_NAMENODE_STATE_CONTEXT_ENABLED_DEFAULT = true;
{code}
but
{code}

  dfs.namenode.state.context.enabled
  false
{code}
Do you want it enabled or disabled by default? :)

The rest looks good to me other than this issue.

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state status on NameNode. 
> Specifically, this is done by creating/updating/checking a 
> {{GlobalStateIdContext}} instance. Currently, even without enabling SBN read, 
> this logic is still be checked.  We can make this configurable so that when 
> SBN read is not enabled, there is no such overhead and everything works as-is.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936237#comment-16936237
 ] 

Hadoop QA commented on HDFS-14858:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.TestStateAlignmentContextWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981101/HDFS-14858.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 2036748b96ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c30e495 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27944/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27944/testReport/ |
| Max. process+thread count | 2632 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Moved] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-23 Thread Tsz Wo Nicholas Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze moved RATIS-688 to HDDS-2169:
-

 Component/s: (was: server)
  (was: client)
   Fix Version/s: (was: 0.4.0)
 Key: HDDS-2169  (was: RATIS-688)
Target Version/s:   (was: 0.4.0)
Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
  Issue Type: Improvement  (was: Bug)
 Project: Hadoop Distributed Data Store  (was: Ratis)

> Avoid buffer copies while submitting client requests in Ratis
> -
>
> Key: HDDS-2169
> URL: https://issues.apache.org/jira/browse/HDDS-2169
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shashikant Banerjee
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
>
> Currently, while sending write requests to Ratis from Ozone, a protobuf 
> object containing the data is encoded, and the resulting protobuf is then 
> converted to a ByteString, which internally copies the buffer embedded 
> inside the protobuf again so that it can be submitted to the Ratis client. 
> Likewise, while sending the appendRequest and building up the 
> appendRequestProto, the data might be copied yet again. The idea here is to 
> let the client pass the raw data (stateMachine data) separately to the Ratis 
> client without the copying overhead. 
>  
> {code:java}
> private CompletableFuture sendRequestAsync(
> ContainerCommandRequestProto request) {
>   try (Scope scope = GlobalTracer.get()
>   .buildSpan("XceiverClientRatis." + request.getCmdType().name())
>   .startActive(true)) {
> ContainerCommandRequestProto finalPayload =
> ContainerCommandRequestProto.newBuilder(request)
> .setTraceID(TracingUtil.exportCurrentSpan())
> .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> //  finalPayload already has the byteString data embedded. 
> ByteString byteString = finalPayload.toByteString(); -> It involves a 
> copy again.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest,
>   sanitizeForDebug(finalPayload));
> }
> return isReadOnlyRequest ?
> getClient().sendReadOnlyAsync(() -> byteString) :
> getClient().sendAsync(() -> byteString);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?focusedWorklogId=317014=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317014
 ]

ASF GitHub Bot logged work on HDDS-2168:


Author: ASF GitHub Bot
Created on: 23/Sep/19 22:26
Start Date: 23/Sep/19 22:26
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1509: HDDS-2168. 
TestOzoneManagerDoubleBufferWithOMResponse sometimes fails…
URL: https://github.com/apache/hadoop/pull/1509#issuecomment-534309184
 
 
   @bharatviswa504 @avijayanhwx Please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317014)
Time Spent: 0.5h  (was: 20m)

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times in dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?focusedWorklogId=317009=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317009
 ]

ASF GitHub Bot logged work on HDDS-2168:


Author: ASF GitHub Bot
Created on: 23/Sep/19 22:22
Start Date: 23/Sep/19 22:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1509: 
HDDS-2168. TestOzoneManagerDoubleBufferWithOMResponse sometimes fails…
URL: https://github.com/apache/hadoop/pull/1509
 
 
   … with out of memory error
   
   After debugging this issue with @avijayanhwx and @bharatviswa504, we found 
that Mockito was the culprit here: it was consuming lots of heap memory to store the 
recorded invocations.
   
   Thanks to @avijayanhwx and @bharatviswa504 for helping! 
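For readers hitting the same symptom, one way to keep Mockito from retaining every 
invocation in a long-running test is a stub-only mock; this is only an illustrative 
sketch (the interface name is made up), not necessarily what the patch does:

{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.mockito.Mockito.withSettings;

public class StubOnlyMockSketch {
  // Hypothetical dependency, used only for this sketch.
  interface Reader {
    String read(long id);
  }

  public static void main(String[] args) {
    // A stub-only mock does not record its invocations, so calling it millions
    // of times in a tight loop does not keep growing the heap like a default mock.
    Reader reader = mock(Reader.class, withSettings().stubOnly());
    when(reader.read(1L)).thenReturn("value");

    for (int i = 0; i < 1_000_000; i++) {
      reader.read(1L);
    }
    System.out.println("done: " + reader.read(1L));
  }
}
{code}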
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317009)
Remaining Estimate: 0h
Time Spent: 10m

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times in dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?focusedWorklogId=317010=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317010
 ]

ASF GitHub Bot logged work on HDDS-2168:


Author: ASF GitHub Bot
Created on: 23/Sep/19 22:22
Start Date: 23/Sep/19 22:22
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1509: HDDS-2168. 
TestOzoneManagerDoubleBufferWithOMResponse sometimes fails…
URL: https://github.com/apache/hadoop/pull/1509#issuecomment-534308173
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 317010)
Time Spent: 20m  (was: 10m)

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times in dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2168:
-
Labels: pull-request-available  (was: )

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times in dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936216#comment-16936216
 ] 

Hadoop QA commented on HDFS-14864:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 128 unchanged - 3 fixed = 128 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.TestPread |
|   | hadoop.fs.contract.hdfs.TestHDFSContractPathHandle |
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981098/HDFS-14864.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f67b38d98ad7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c30e495 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27943/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27943/testReport/ |
| 

[jira] [Commented] (HDFS-14845) Request is a replay (34) error in httpfs

2019-09-23 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936213#comment-16936213
 ] 

Eric Yang commented on HDFS-14845:
--

[~Prabhu Joseph] Thank you for the patch.  Patch 004 looks good to me.  
[~aajisaka] let us know if this looks good on your end.  Thanks

> Request is a replay (34) error in httpfs
> 
>
> Key: HDFS-14845
> URL: https://issues.apache.org/jira/browse/HDFS-14845
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.3.0
> Environment: Kerberos and ZKDelgationTokenSecretManager enabled in 
> HttpFS
>Reporter: Akira Ajisaka
>Assignee: Prabhu Joseph
>Priority: Critical
> Attachments: HDFS-14845-001.patch, HDFS-14845-002.patch, 
> HDFS-14845-003.patch, HDFS-14845-004.patch
>
>
> We are facing "Request is a replay (34)" error when accessing to HDFS via 
> httpfs on trunk.
> {noformat}
> % curl -i --negotiate -u : "https://:4443/webhdfs/v1/?op=liststatus"
> HTTP/1.1 401 Authentication required
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> WWW-Authenticate: Negotiate
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 271
> HTTP/1.1 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
> level: Request is a replay (34))
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Date: Mon, 09 Sep 2019 06:00:04 GMT
> Pragma: no-cache
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> (snip)
> Set-Cookie: hadoop.auth=; Path=/; Secure; HttpOnly
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html;charset=iso-8859-1
> Content-Length: 413
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /webhdfs/v1/. Reason:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2168 started by Vivek Ratnavel Subramanian.

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
> out-of-memory exceptions at times in dev machines.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-23 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-2168:


 Summary: TestOzoneManagerDoubleBufferWithOMResponse sometimes 
fails with out of memory error
 Key: HDDS-2168
 URL: https://issues.apache.org/jira/browse/HDDS-2168
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Manager
Affects Versions: 0.4.1
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails with 
out-of-memory exceptions at times in dev machines.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-09-23 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936192#comment-16936192
 ] 

Konstantin Shvachko commented on HDFS-14655:


+1 Yes it does work as expected now for me.

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655-04.patch, HDFS-14655-05.patch, 
> HDFS-14655-06.patch, HDFS-14655-07.patch, HDFS-14655-08.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936177#comment-16936177
 ] 

Hadoop QA commented on HDFS-14284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 39s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981096/HDFS-14284.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31b12bca9121 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c30e495 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27942/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27942/testReport/ |
| Max. process+thread count | 1578 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| 

[jira] [Commented] (HDDS-2160) Add acceptance test for ozonesecure-mr compose

2019-09-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936174#comment-16936174
 ] 

Hudson commented on HDDS-2160:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17361 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17361/])
HDDS-2160. Add acceptance test for ozonesecure-mr compose. Contributed 
(aengineer: rev 6cbe5d38098f056cdeb063b0620e0ccff105064b)
* (edit) hadoop-ozone/dist/src/main/smoketest/mapreduce.robot
* (add) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/test.sh
* (add) hadoop-ozone/dist/src/main/smoketest/kinit-hadoop.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/kinit.robot


> Add acceptance test for ozonesecure-mr compose
> --
>
> Key: HDDS-2160
> URL: https://issues.apache.org/jira/browse/HDDS-2160
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This will give us coverage for running basic MR jobs on a security-enabled OZONE 
> cluster against YARN. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2160) Add acceptance test for ozonesecure-mr compose

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2160:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution. I have committed this patch to the 
trunk.

> Add acceptance test for ozonesecure-mr compose
> --
>
> Key: HDDS-2160
> URL: https://issues.apache.org/jira/browse/HDDS-2160
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This will give us coverage for running basic MR jobs on a security-enabled OZONE 
> cluster against YARN. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2160) Add acceptance test for ozonesecure-mr compose

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2160?focusedWorklogId=316962=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316962
 ]

ASF GitHub Bot logged work on HDDS-2160:


Author: ASF GitHub Bot
Created on: 23/Sep/19 20:28
Start Date: 23/Sep/19 20:28
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1490: HDDS-2160. 
Add acceptance test for ozonesecure-mr compose. Contribute…
URL: https://github.com/apache/hadoop/pull/1490
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316962)
Time Spent: 40m  (was: 0.5h)

> Add acceptance test for ozonesecure-mr compose
> --
>
> Key: HDDS-2160
> URL: https://issues.apache.org/jira/browse/HDDS-2160
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This will give us coverage for running basic MR jobs on a security-enabled OZONE 
> cluster against YARN. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936165#comment-16936165
 ] 

Hudson commented on HDDS-2161:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17360 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17360/])
HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable 
(aengineer: rev 3fd3d746fc4033cb4ab2265c7b9c9aaf8b39c10c)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/RepeatedOmKeyInfoCodec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/RepeatedOmKeyInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java


> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyname, KeyInfo>.
> When a user deletes a Key, its <keyname, KeyInfo> is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyname, KeyInfo> can get overwritten and we may be 
> left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and preserve 
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from 
> deletedTable, so given the above explanation we may not get accurate 
> information, and it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure RepeatedKeyInfo which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyname, RepeatedKeyInfo>
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance, add the 
> latest key to the list, and store it back to deletedTable; else create a new 
> instance and save it to the table
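
To make the read-modify-write flow above concrete, here is a minimal sketch of the grouping logic: look up any existing entry for the key name in deletedTable, append the newly deleted KeyInfo to it, and write the combined record back. The class shapes and the helper method below are illustrative assumptions for this digest, not the exact API committed for HDDS-2161.

{code}
// Minimal sketch (not the committed code) of grouping repeated deletions of
// the same key name into a single RepeatedOmKeyInfo record for deletedTable.
import java.util.ArrayList;
import java.util.List;

class OmKeyInfoSketch {
  final String keyName;
  final long modificationTime;

  OmKeyInfoSketch(String keyName, long modificationTime) {
    this.keyName = keyName;
    this.modificationTime = modificationTime;
  }
}

class RepeatedOmKeyInfoSketch {
  private final List<OmKeyInfoSketch> omKeyInfoList = new ArrayList<>();

  void addOmKeyInfo(OmKeyInfoSketch keyInfo) {
    omKeyInfoList.add(keyInfo);
  }

  List<OmKeyInfoSketch> getOmKeyInfoList() {
    return omKeyInfoList;
  }
}

final class DeletedTableFlowSketch {
  private DeletedTableFlowSketch() { }

  // existingEntry is the value currently stored under keyName in deletedTable,
  // or null if this key name has never been deleted before. The caller then
  // persists the returned value back into deletedTable under the same keyName.
  static RepeatedOmKeyInfoSketch prepareKeyForDelete(
      OmKeyInfoSketch deletedKey, RepeatedOmKeyInfoSketch existingEntry) {
    RepeatedOmKeyInfoSketch repeated =
        existingEntry != null ? existingEntry : new RepeatedOmKeyInfoSketch();
    repeated.addOmKeyInfo(deletedKey);
    return repeated;
  }
}
{code}

Because the existing entry is fetched and extended rather than overwritten, no earlier deletion of the same key name is lost, which is what prevents the dangling blocks described above.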



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936163#comment-16936163
 ] 

Anu Engineer commented on HDDS-2161:


[~dineshchitlangia]  Thank you for the contribution. I have committed this 
patch to the trunk.

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyname, KeyInfo>.
> When a user deletes a Key, its <keyname, KeyInfo> is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyname, KeyInfo> can get overwritten and we may be 
> left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and preserve 
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from 
> deletedTable, so given the above explanation we may not get accurate 
> information, and it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure RepeatedKeyInfo which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyname, RepeatedKeyInfo>
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance, add the 
> latest key to the list, and store it back to deletedTable; else create a new 
> instance and save it to the table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2161:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyname, KeyInfo>.
> When a user deletes a Key, its <keyname, KeyInfo> is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyname, KeyInfo> can get overwritten and we may be 
> left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and preserve 
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from 
> deletedTable, so given the above explanation we may not get accurate 
> information, and it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure RepeatedKeyInfo which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyname, RepeatedKeyInfo>
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance, add the 
> latest key to the list, and store it back to deletedTable; else create a new 
> instance and save it to the table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316958=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316958
 ]

ASF GitHub Bot logged work on HDDS-2161:


Author: ASF GitHub Bot
Created on: 23/Sep/19 20:22
Start Date: 23/Sep/19 20:22
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1491: HDDS-2161. Create 
RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-534268476
 
 
   @dineshchitlangia  Thank you for the contribution. I have committed this 
patch to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316958)
Time Spent: 2.5h  (was: 2h 20m)

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyname, KeyInfo>.
> When a user deletes a Key, its <keyname, KeyInfo> is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyname, KeyInfo> can get overwritten and we may be 
> left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and preserve 
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from 
> deletedTable, so given the above explanation we may not get accurate 
> information, and it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure RepeatedKeyInfo which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyname, RepeatedKeyInfo>
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance, add the 
> latest key to the list, and store it back to deletedTable; else create a new 
> instance and save it to the table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316959=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316959
 ]

ASF GitHub Bot logged work on HDDS-2161:


Author: ASF GitHub Bot
Created on: 23/Sep/19 20:22
Start Date: 23/Sep/19 20:22
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1491: HDDS-2161. 
Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316959)
Time Spent: 2h 40m  (was: 2.5h)

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyname, KeyInfo>.
> When a user deletes a Key, its <keyname, KeyInfo> is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyname, KeyInfo> can get overwritten and we may be 
> left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and preserve 
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from 
> deletedTable, so given the above explanation we may not get accurate 
> information, and it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure RepeatedKeyInfo which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyname, RepeatedKeyInfo>
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance, add the 
> latest key to the list, and store it back to deletedTable; else create a new 
> instance and save it to the table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-23 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936146#comment-16936146
 ] 

Chen Liang commented on HDFS-14858:
---

Posted the v002 patch to rebase. Also added the key to hdfs-default as [~ayushtkn] 
suggested.

[~jojochuang] I don't think this has to be a blocker, because this is a minor 
performance optimization and we haven't really observed any actual performance 
impact even without this fix. The goal is mostly along the lines of isolating the 
overhead of CRS from non-CRS usage. HDFS-14822 is the more impactful one in terms 
of performance. Please let me know if you still think this one should be marked 
as a blocker.

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state status on NameNode. 
> Specifically, this is done by creating/updating/checking a 
> {{GlobalStateIdContext}} instance. Currently, even without enabling SBN read, 
> this logic is still checked.  We can make this configurable so that when 
> SBN read is not enabled, there is no such overhead and everything works as-is.
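
As a rough illustration of the idea (not the attached patch), the NameNode RPC server could construct the {{GlobalStateIdContext}} only when a configuration flag is set, so the state-id bookkeeping disappears entirely when SBN reads are not in use. The configuration key name and default below are assumptions made for this sketch.

{code}
// Hedged sketch: only create the AlignmentContext implementation when the
// feature is enabled by configuration. Key name and default are illustrative.
import org.apache.hadoop.conf.Configuration;

class AlignmentContextSetupSketch {
  // Hypothetical key; the actual patch may use a different name.
  static final String STATE_CONTEXT_ENABLED_KEY =
      "dfs.namenode.state.context.enabled";
  static final boolean STATE_CONTEXT_ENABLED_DEFAULT = false;

  private Object alignmentContext; // GlobalStateIdContext in the real code

  void init(Configuration conf) {
    boolean enabled = conf.getBoolean(
        STATE_CONTEXT_ENABLED_KEY, STATE_CONTEXT_ENABLED_DEFAULT);
    if (enabled) {
      // Pay the cost of maintaining and propagating the state id only here.
      alignmentContext = newGlobalStateIdContext();
    } else {
      // No context: the RPC layer skips state-id handling altogether.
      alignmentContext = null;
    }
  }

  private Object newGlobalStateIdContext() {
    return new Object(); // placeholder for new GlobalStateIdContext(namesystem)
  }
}
{code}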



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936144#comment-16936144
 ] 

Hadoop QA commented on HDFS-14859:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 13 new + 3 unchanged - 0 fixed = 16 total (was 3) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981095/HDFS-14859.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e251c90974ed 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c30e495 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27941/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27941/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27941/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Updated] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-23 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14858:
--
Attachment: HDFS-14858.002.patch

> [SBN read] Allow configurably enable/disable AlignmentContext on NameNode
> -
>
> Key: HDFS-14858
> URL: https://issues.apache.org/jira/browse/HDFS-14858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14858.001.patch, HDFS-14858.002.patch
>
>
> As brought up under HDFS-14277, we should make sure SBN read has no 
> performance impact when it is not enabled. One potential overhead of SBN read 
> is maintaining and updating additional state status on NameNode. 
> Specifically, this is done by creating/updating/checking a 
> {{GlobalStateIdContext}} instance. Currently, even without enabling SBN read, 
> this logic is still checked.  We can make this configurable so that when 
> SBN read is not enabled, there is no such overhead and everything works as-is.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14860) Clean Up StoragePolicySatisfyManager.java

2019-09-23 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936141#comment-16936141
 ] 

David Mollitor commented on HDFS-14860:
---

I'll submit as many times as it takes :)

> Clean Up StoragePolicySatisfyManager.java
> -
>
> Key: HDFS-14860
> URL: https://issues.apache.org/jira/browse/HDFS-14860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14860.1.patch, HDFS-14860.2.patch, 
> HDFS-14860.3.patch
>
>
> * Remove superfluous debug log guards
> * Use {{java.util.concurrent}} package for internal structure instead of 
> external synchronization.
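
For readers unfamiliar with the two items, a small before/after sketch (with placeholder names, not the actual StoragePolicySatisfyManager fields): SLF4J parameterized logging makes the {{isDebugEnabled()}} guard unnecessary for cheap arguments, and a collection from {{java.util.concurrent}} removes the need for external {{synchronized}} blocks.

{code}
// Hedged sketch of the two cleanups: drop isDebugEnabled() guards around
// cheap parameterized log calls, and let a concurrent collection replace
// externally synchronized access. Names are illustrative placeholders.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SpsManagerSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(SpsManagerSketch.class);

  // Before: a plain HashMap guarded by synchronized(this) at every call site.
  // After: a ConcurrentHashMap handles its own thread safety.
  private final Map<Long, String> pathsToSatisfy = new ConcurrentHashMap<>();

  void addPath(long inodeId, String path) {
    // Before:
    //   if (LOG.isDebugEnabled()) {
    //     LOG.debug("Adding path " + path + " with id " + inodeId);
    //   }
    // After: the guard is superfluous because SLF4J only formats the
    // message when debug logging is actually enabled.
    LOG.debug("Adding path {} with id {}", path, inodeId);
    pathsToSatisfy.put(inodeId, path);
  }

  String removePath(long inodeId) {
    return pathsToSatisfy.remove(inodeId);
  }
}
{code}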



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14860) Clean Up StoragePolicySatisfyManager.java

2019-09-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14860:
--
Attachment: HDFS-14860.3.patch

> Clean Up StoragePolicySatisfyManager.java
> -
>
> Key: HDFS-14860
> URL: https://issues.apache.org/jira/browse/HDFS-14860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14860.1.patch, HDFS-14860.2.patch, 
> HDFS-14860.3.patch
>
>
> * Remove superfluous debug log guards
> * Use {{java.util.concurrent}} package for internal structure instead of 
> external synchronization.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14860) Clean Up StoragePolicySatisfyManager.java

2019-09-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14860:
--
Status: Open  (was: Patch Available)

> Clean Up StoragePolicySatisfyManager.java
> -
>
> Key: HDFS-14860
> URL: https://issues.apache.org/jira/browse/HDFS-14860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14860.1.patch, HDFS-14860.2.patch
>
>
> * Remove superfluous debug log guards
> * Use {{java.util.concurrent}} package for internal structure instead of 
> external synchronization.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14864:
--
Attachment: HDFS-14864.3.patch

> DatanodeDescriptor Use Concurrent BlockingQueue
> ---
>
> Key: HDFS-14864
> URL: https://issues.apache.org/jira/browse/HDFS-14864
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14864.1.patch, HDFS-14864.2.patch, 
> HDFS-14864.3.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L104-L106
> This collection needs to be thread-safe and it needs to be polled repeatedly to 
> drain it, so use {{BlockingQueue}}, which has a {{drainTo()}} method 
> just for this purpose:
> {quote}
> This operation may be more efficient than repeatedly polling this queue.
> {quote}
> [https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html#drainTo(java.util.Collection,%20int)]
> Also, the collection currently returns 'null' if there is nothing to drain from the 
> queue.  This is a confusing and error-prone effect.  It should just return an 
> empty list.  I've also updated the code to be more consistent and to return a 
> Java {{List}} in all places instead of a {{List}} in some and a native array 
> in others.  This will make the entire usage much more consistent and safe.
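
A minimal sketch of the pattern described above (placeholder names, not the DatanodeDescriptor API): keep pending items in a {{BlockingQueue}}, drain them in one {{drainTo()}} call, and hand callers a possibly empty {{List}} instead of {{null}} when there is nothing queued.

{code}
// Hedged sketch of the BlockingQueue + drainTo() pattern described above.
// The class and method names are illustrative only.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class PendingWorkQueueSketch<T> {
  private final BlockingQueue<T> pending = new LinkedBlockingQueue<>();

  void enqueue(T item) {
    pending.add(item);
  }

  // Drain up to maxItems in one operation instead of polling repeatedly.
  // The result is always a (possibly empty) list, never null, which avoids
  // the confusing null-return the description calls out.
  List<T> drain(int maxItems) {
    List<T> drained = new ArrayList<>();
    pending.drainTo(drained, maxItems);
    return drained;
  }
}
{code}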



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14864:
--
Status: Patch Available  (was: Open)

Same patch.  Different name to kick off CI.

> DatanodeDescriptor Use Concurrent BlockingQueue
> ---
>
> Key: HDFS-14864
> URL: https://issues.apache.org/jira/browse/HDFS-14864
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14864.1.patch, HDFS-14864.2.patch, 
> HDFS-14864.3.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L104-L106
> This collection needs to be thread-safe and it needs to be polled repeatedly to 
> drain it, so use {{BlockingQueue}}, which has a {{drainTo()}} method 
> just for this purpose:
> {quote}
> This operation may be more efficient than repeatedly polling this queue.
> {quote}
> [https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html#drainTo(java.util.Collection,%20int)]
> Also, the collection currently returns 'null' if there is nothing to drain from the 
> queue.  This is a confusing and error-prone effect.  It should just return an 
> empty list.  I've also updated the code to be more consistent and to return a 
> Java {{List}} in all places instead of a {{List}} in some and a native array 
> in others.  This will make the entire usage much more consistent and safe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14864) DatanodeDescriptor Use Concurrent BlockingQueue

2019-09-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14864:
--
Status: Open  (was: Patch Available)

> DatanodeDescriptor Use Concurrent BlockingQueue
> ---
>
> Key: HDFS-14864
> URL: https://issues.apache.org/jira/browse/HDFS-14864
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HDFS-14864.1.patch, HDFS-14864.2.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L104-L106
> This collection needs to be thread-safe and it needs to be polled repeatedly to 
> drain it, so use {{BlockingQueue}}, which has a {{drainTo()}} method 
> just for this purpose:
> {quote}
> This operation may be more efficient than repeatedly polling this queue.
> {quote}
> [https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html#drainTo(java.util.Collection,%20int)]
> Also, the collection currently returns 'null' if there is nothing to drain from the 
> queue.  This is a confusing and error-prone effect.  It should just return an 
> empty list.  I've also updated the code to be more consistent and to return a 
> Java {{List}} in all places instead of a {{List}} in some and a native array 
> in others.  This will make the entire usage much more consistent and safe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14284) RBF: Log Router identifier when reporting exceptions

2019-09-23 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14284:
-
Attachment: HDFS-14284.001.patch
Status: Patch Available  (was: Open)

> RBF: Log Router identifier when reporting exceptions
> 
>
> Key: HDFS-14284
> URL: https://issues.apache.org/jira/browse/HDFS-14284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14284.001.patch
>
>
> The typical setup is to use multiple Routers through 
> ConfiguredFailoverProxyProvider.
> In a regular HA Namenode setup, it is easy to know which NN was used.
> However, in RBF, any Router can be the one reporting the exception, and it is 
> hard to know which one it was.
> We should have a way to identify which Router/Namenode was the one that triggered 
> the exception.
> This would also apply to Observer Namenodes.
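
One possible shape for this, sketched with made-up names (not the eventual patch): wrap the exception on the Router side so the message carries the local Router's identifier before it travels back to the client.

{code}
// Hedged sketch: attach the Router identifier to exceptions raised while
// proxying a client call, so client logs show which Router reported it.
// RouterRpcSketch and its members are illustrative, not the actual patch.
import java.io.IOException;

class RouterRpcSketch {
  private final String routerId; // e.g. "router1:8888"

  RouterRpcSketch(String routerId) {
    this.routerId = routerId;
  }

  <T> T invokeWithRouterId(CheckedSupplier<T> call) throws IOException {
    try {
      return call.get();
    } catch (IOException ioe) {
      // Re-throw with the Router identity prepended, keeping the original
      // exception as the cause, so it is clear which Router (and therefore
      // which proxied NameNode call) failed.
      throw new IOException(
          "Router " + routerId + ": " + ioe.getMessage(), ioe);
    }
  }

  @FunctionalInterface
  interface CheckedSupplier<T> {
    T get() throws IOException;
  }
}
{code}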



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936128#comment-16936128
 ] 

Hadoop QA commented on HDFS-14859:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 13 new + 3 unchanged - 0 fixed = 16 total (was 3) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
21s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981092/HDFS-14859.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06bd1b0e279f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c30e495 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27940/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27940/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27940/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936122#comment-16936122
 ] 

Dinesh Chitlangia commented on HDFS-14859:
--

[~smajeti] Thanks for the quick turnaround. I noticed the old variable names in 
the comments in BlockManagerSafeMode.java L573 & 576 in your patch.

Either you can fix this and any other such occurrences in this file in the 
patch itself, or whoever commits this can fix it during the commit. I 
leave it up to you/the committer :)

I usually prefer to fix it in the patch; it makes things easier for the committer.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been made with respect to whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary evaluations 
> of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false for most of the 
> time in NN startup safe mode. We could do something like the following to avoid this
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936115#comment-16936115
 ] 

Srinivasu Majeti commented on HDFS-14859:
-

Thank you for the quick review, [~dineshchitlangia]. I have uploaded another 
version of the modified patch per your comments.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been made with respect to whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary evaluations 
> of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false for most of the 
> time in NN startup safe mode. We could do something like the following to avoid this
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasu Majeti updated HDFS-14859:

Attachment: HDFS-14859.003.patch

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been made with respect to whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary evaluations 
> of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false for most of the 
> time in NN startup safe mode. We could do something like the following to avoid this
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936106#comment-16936106
 ] 

Dinesh Chitlangia edited comment on HDFS-14859 at 9/23/19 6:34 PM:
---

[~smajeti] Thank you for working on this!

Since we are on the path of improving readability, could you please rename the 
variables:

{{blockThresholdMet}} -> {{isBlockThresholdMet}}

{{datanodeThresholdMet}} -> {{isDatanodeThresholdMet}}

 

Other than that, it LGTM.


was (Author: dineshchitlangia):
[~smajeti] Thank you for working on this!

Since we are on the path of improving readability, could you please rename the 
variables:

blockThresholdMet -> isBlockThresholdMet

datanodeThresholdMet -> isDatanodeThresholdMet

 

Other than that, it LGTM.

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 for the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been made with respect to whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary evaluations 
> of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false for most of the 
> time in NN startup safe mode. We could do something like the following to avoid this
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936106#comment-16936106
 ] 

Dinesh Chitlangia commented on HDFS-14859:
--

[~smajeti] Thank you for working on this!

Since we are on the path of improving readability, could you please rename the 
variables:

blockThresholdMet -> isBlockThresholdMet

datanodeThresholdMet -> isDatanodeThresholdMet

 

Other than that, it LGTM.
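
For illustration only, a minimal sketch of how the renamed flags might read in 
{{areThresholdsMet()}}; the surrounding fields are taken from the snippet 
quoted below, not from the attached patch, and the live-datanode count is 
still skipped whenever it cannot change the outcome:

{code}
// Sketch, not the attached patch: flag names follow the suggested renames.
synchronized (this) {
  boolean isBlockThresholdMet = (blockSafe >= blockThreshold);
  boolean isDatanodeThresholdMet = true;
  if (isBlockThresholdMet && datanodeThreshold > 0) {
    // Only count live datanodes when the answer can still matter.
    int datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
    isDatanodeThresholdMet = (datanodeNum >= datanodeThreshold);
  }
  return isBlockThresholdMet && isDatanodeThresholdMet;
}
{code}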

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 to the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936092#comment-16936092
 ] 

Íñigo Goiri commented on HDFS-14859:


I know the usual recommendation is to have a single return, but I think this 
is short enough for us to do:
{code}
synchronized (this) {
  if (blockSafe < blockThreshold) {
return false;
  }
  if (datanodeThreshold > 0) {
int datanodeNum = 
blockManager.getDatanodeManager().getNumLiveDataNodes();
if (datanodeNum < datanodeThreshold) {
  return false;
}
  }
  return true;
}
{code}
(Please check that the condition is correct; I may have messed up the order or 
the <.)
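
Worth noting for the one-line form in the description: in Java, {{&&}} binds 
more tightly than the ternary operator, so {{a && b ? c : true}} groups as 
{{(a && b) ? c : true}} and would return {{true}} whenever 
{{blockSafe >= blockThreshold}} is false. A parenthesized sketch of the 
intended condition (equivalent to the multi-line version above, and still 
short-circuiting past getNumLiveDataNodes when it cannot matter) would be:

{code}
return blockSafe >= blockThreshold
    && (datanodeThreshold <= 0
        || blockManager.getDatanodeManager().getNumLiveDataNodes()
            >= datanodeThreshold);
{code}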

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 to the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936091#comment-16936091
 ] 

Srinivasu Majeti commented on HDFS-14859:
-

I understand your point, [~goiri]. I have modified and uploaded the patch with 
better readability, I hope. Hi [~arp], I have now also added some test cases to 
verify the safe mode impact based on whether the datanode threshold is 
configured or not (with 8 possible combinations).
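
As a rough picture of that combination coverage, a self-contained sketch of the 
threshold logic is below; it is not the test in the attached patch (which 
exercises the NameNode itself), just a stand-in that makes the eight cases easy 
to enumerate:

{code}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TestSafeModeThresholdCombinations {

  // Stand-in for the intended areThresholdsMet() behaviour, so the eight
  // (block threshold met, datanode threshold configured, enough live
  // datanodes) combinations can be enumerated without a MiniDFSCluster.
  private static boolean thresholdsMet(long blockSafe, long blockThreshold,
      int datanodeThreshold, int liveDatanodes) {
    if (blockSafe < blockThreshold) {
      return false;
    }
    return datanodeThreshold <= 0 || liveDatanodes >= datanodeThreshold;
  }

  @Test
  public void testAllCombinations() {
    // Block threshold met.
    assertTrue(thresholdsMet(10, 5, 0, 0));   // no datanode threshold
    assertTrue(thresholdsMet(10, 5, 0, 3));   // datanode count irrelevant
    assertTrue(thresholdsMet(10, 5, 2, 3));   // enough live datanodes
    assertFalse(thresholdsMet(10, 5, 4, 3));  // too few live datanodes
    // Block threshold not met: always stay in safe mode.
    assertFalse(thresholdsMet(1, 5, 0, 0));
    assertFalse(thresholdsMet(1, 5, 0, 3));
    assertFalse(thresholdsMet(1, 5, 2, 3));
    assertFalse(thresholdsMet(1, 5, 4, 3));
  }
}
{code}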

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 to the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14859) Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-09-23 Thread Srinivasu Majeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivasu Majeti updated HDFS-14859:

Attachment: HDFS-14859.002.patch

> Prevent Un-necessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> 
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 to the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been done w.r.t. whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary 
> evaluations of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes 
> parameter is set > 0, even though "blockSafe >= blockThreshold" is false most 
> of the time during NN startup safe mode. We could do something like the 
> following to avoid this:
> {code}
> private boolean areThresholdsMet() {
> assert namesystem.hasWriteLock();
> synchronized (this) {
>   return blockSafe >= blockThreshold && (datanodeThreshold > 0)?
>   blockManager.getDatanodeManager().getNumLiveDataNodes() >= 
> datanodeThreshold : true;
> }
>   } 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=316889=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316889
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 23/Sep/19 18:11
Start Date: 23/Sep/19 18:11
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1489: HDDS-2019. 
Handle Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489#discussion_r327256562
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -33,4 +42,29 @@ public static String getVolumeName(String userName) {
 Objects.requireNonNull(userName);
 return DigestUtils.md5Hex(userName);
   }
+
+  /**
+   * Generate service Name for token.
+   * @param configuration
+   * @param serviceId - ozone manager service ID
+   * @param omNodeIds - list of node ids for the given OM service.
+   * @return service Name.
+   */
+  public static String buildServiceNameForToken(
 
 Review comment:
   Can we add some unit tests for this?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 316889)
Time Spent: 1h 20m  (was: 1h 10m)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token and the token's serviceName is set with the 
> OM address; for the HA case, this should be set with all OM RPC addresses.
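
A minimal sketch of the idea follows; the comma delimiter and the plain string 
addresses are assumptions for illustration, not the actual OzoneS3Util code:

{code}
import java.util.Arrays;
import java.util.Collection;

public final class OmHaServiceNameSketch {

  /**
   * Joins the RPC address of every OM node of an HA service into a single
   * dtService string, instead of using just one OM address.
   */
  public static String buildServiceName(Collection<String> omRpcAddresses) {
    return String.join(",", omRpcAddresses);
  }

  public static void main(String[] args) {
    // Prints "om1:9862,om2:9862,om3:9862"
    System.out.println(
        buildServiceName(Arrays.asList("om1:9862", "om2:9862", "om3:9862")));
  }
}
{code}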



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


