[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851541#comment-16851541
 ] 

Hadoop QA commented on HDFS-14508:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 24s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m  4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 57s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 10s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 26s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 16s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14508 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12970261/HDFS-14508-HDFS-13891.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d0b428ef4d7c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 04977cc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/26866/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/26866/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/26866/artifact/out/patch-asflicense-problems.txt |
| Max. process+thread count | 1373 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 

[jira] [Commented] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851537#comment-16851537
 ] 

Akira Ajisaka commented on HDFS-13654:
--

Thanks [~tasanuma] for the patch. Two comments:
* Would you add {{@Override}} to 
TestHttpFSServerWebServerWithRandomSecret#beforeClass?
* In the beforeClass method, you need to handle the Windows environment as 
HDFS-14049 did (see the sketch below).
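
A minimal, hedged sketch of the second point only. The guard below is one common pattern in HDFS tests (skip on Windows); it is an illustration, not the actual patch nor necessarily what HDFS-14049 did.

{code:java}
// Illustrative stub only, not the real TestHttpFSServerWebServerWithRandomSecret.
import static org.junit.Assume.assumeFalse;

import org.apache.hadoop.util.Shell;
import org.junit.BeforeClass;

public class WindowsGuardSketch {
  @BeforeClass
  public static void beforeClass() throws Exception {
    // Skip (or branch) when running on Windows; the secret-file setup in this
    // sketch is assumed to rely on POSIX-style paths.
    assumeFalse("Skipping on Windows", Shell.WINDOWS);
    // ... remaining setup for the random-secret web server would go here ...
  }
}
{code}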

> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  
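
A minimal sketch of the idea (not the attached patches): fall back to a randomly generated secret when the configured secret file is absent. The class and method names below are illustrative assumptions.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;
import java.util.Base64;

public class SignatureSecretFallback {
  /** Returns the secret from the file if it exists, otherwise a random one. */
  public static byte[] loadOrGenerateSecret(String secretFile) throws Exception {
    Path p = Paths.get(secretFile);
    if (Files.exists(p)) {
      return Files.readAllBytes(p);
    }
    byte[] random = new byte[32];
    new SecureRandom().nextBytes(random);
    return Base64.getEncoder().encode(random);
  }
}
{code}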






[jira] [Commented] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851526#comment-16851526
 ] 

Akira Ajisaka commented on HDFS-10210:
--

Rebased the patch. Hi [~jojochuang] and [~ste...@apache.org], would you review 
this?

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch, 
> HDFS-10210.003.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Updated] (HDFS-10210) Remove the defunct startKdc profile from hdfs

2019-05-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10210:
-
Attachment: HDFS-10210.003.patch

> Remove the defunct startKdc profile from hdfs
> -
>
> Key: HDFS-10210
> URL: https://issues.apache.org/jira/browse/HDFS-10210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-10210.001.patch, HDFS-10210.002.patch, 
> HDFS-10210.003.patch
>
>
> This is the corresponding HDFS jira of HADOOP-12948.
> The startKdc profile introduced in HDFS-3016 is broken, and is actually no 
> longer used at all. 
> Let's remove it.






[jira] [Commented] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-29 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851521#comment-16851521
 ] 

Takanobu Asanuma commented on HDFS-13654:
-

Uploaded the 4th patch to resolve conflicts.

> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  






[jira] [Updated] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-29 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13654:

Attachment: HDFS-13654.4.patch

> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed all the way through to branch-2.8.
Thanks for reporting the issue [~shenyinjie] and thanks for offering a fix 
[~ayushtkn]

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.8.patch, HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.
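
A hedged Java sketch of the reproduction steps above; the path, the favored-node address, and the write size are illustrative assumptions, and fs.defaultFS is assumed to point at an HDFS cluster.

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class OneSsdFavoredNodesRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path dir = new Path("/pathA");
    DistributedFileSystem dfs = (DistributedFileSystem) dir.getFileSystem(conf);
    // Step 1: set the ONE_SSD storage policy on the directory.
    dfs.setStoragePolicy(dir, "ONE_SSD");
    // Step 2: create a file with favoredNodes and write some data.
    InetSocketAddress[] favoredNodes = {
        new InetSocketAddress("dn1.example.com", 9866) };
    try (FSDataOutputStream out = dfs.create(new Path(dir, "file"),
        FsPermission.getFileDefault(), true, 4096, (short) 3,
        dfs.getDefaultBlockSize(), null, favoredNodes)) {
      out.write(new byte[1024]);
    }
    // With the reported bug, the three replicas can land on 2 SSD + 1 DISK,
    // violating ONE_SSD (which expects exactly one replica on SSD).
  }
}
{code}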






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Fix Version/s: 2.8.6

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.8.patch, HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Fix Version/s: 2.9.3

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.8.patch, HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.






[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250705&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250705
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 04:20
Start Date: 30/May/19 04:20
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497196689
 
 
   The last commit moved all the classes to a package named bucket under 
request/response.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250705)
Time Spent: 9h 20m  (was: 9h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement bucket operations. HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Attachment: HDFS-14512.branch-2.8.patch

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.8.patch, HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   3.0.4
   2.10.0

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.






[jira] [Updated] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14512:
---
Attachment: HDFS-14512.branch-2.patch

> ONE_SSD policy will be violated while write data with 
> DistributedFileSystem.create(favoredNodes)
> 
>
> Key: HDFS-14512
> URL: https://issues.apache.org/jira/browse/HDFS-14512
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14512-01.patch, HDFS-14512-02.patch, 
> HDFS-14512.branch-2.patch, TestToRepro.patch
>
>
> Reproduce steps:
> 1. Set the ONE_SSD storage policy on a path A.
> 2. A client writes data to path A via 
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes 
> parameter.
> Then the three replicas of the file end up on 2 SSD and 1 DISK, which 
> violates the ONE_SSD policy.
> I hope this is clear.






[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250700&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250700
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 04:01
Start Date: 30/May/19 04:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497193840
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250700)
Time Spent: 9h 10m  (was: 9h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement bucket operations. HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.






[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=250699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250699
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 30/May/19 04:00
Start Date: 30/May/19 04:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497193667
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 43 | Maven dependency ordering for branch |
   | +1 | mvninstall | 568 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 821 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 146 | trunk passed |
   | 0 | spotbugs | 293 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 476 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 491 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | cc | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 609 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 502 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1316 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6310 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux fb7108963f48 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/testReport/ |
   | Max. process+thread count | 4281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-847/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250699)
Time Spent: 6h 40m  (was: 6.5h)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume




[jira] [Commented] (HDFS-14522) Allow compact property description in xml in httpfs

2019-05-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851497#comment-16851497
 ] 

Akira Ajisaka commented on HDFS-14522:
--

o.a.h.lib.server.Server calls ConfigurationUtils.load(Configuration, 
InputStream), which does not support the compact property form. IMO, replacing 
the call with Configuration.addResource(String fileName) should work; a sketch 
follows.
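
A minimal sketch of the suggested direction (not HttpFS's actual server code); the resource name and property key below are hypothetical.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CompactPropertySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Configuration parses the resource itself, so both the classic
    // <property><name>..</name><value>..</value></property> form and the
    // compact form introduced by HADOOP-6964 are understood, unlike
    // ConfigurationUtils.load(Configuration, InputStream).
    conf.addResource("httpfs-site.xml");
    System.out.println(conf.get("httpfs.some.property"));
  }
}
{code}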

> Allow compact property description in xml in httpfs
> ---
>
> Key: HDFS-14522
> URL: https://issues.apache.org/jira/browse/HDFS-14522
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Akira Ajisaka
>Priority: Major
>
> HADOOP-6964 allowed compact property description in Hadoop configuration, 
> however, it is not allowed in httpfs.






[jira] [Created] (HDFS-14522) Allow compact property description in xml in httpfs

2019-05-29 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-14522:


 Summary: Allow compact property description in xml in httpfs
 Key: HDFS-14522
 URL: https://issues.apache.org/jira/browse/HDFS-14522
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Reporter: Akira Ajisaka


HADOOP-6964 allowed compact property description in Hadoop configuration, 
however, it is not allowed in httpfs.






[jira] [Updated] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14514:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~smeng] [~sodonnell] for the work. I'll resolve it for now. If you 
think we should cherry pick the change into branch-2.8, let me know and we can 
reopen to work on that.

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file is opened for write in *encryption zone*, taken a 
> snapshot and appended, the read out file size in the snapshot is larger than 
> the listing size. This happens even when immutable snapshot HDFS-11402 is 
> enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x are still suffering from this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.






[jira] [Commented] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851494#comment-16851494
 ] 

Wei-Chiu Chuang commented on HDFS-14514:


I am not sure if this is a bug prior to branch-2.9. HDFS-11402 was resolved in 
2.9.0 and later. Prior to 2.9.0, we don't even support immutable snapshots and 
fixing this bug doesn't make much sense.

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file is opened for write in *encryption zone*, taken a 
> snapshot and appended, the read out file size in the snapshot is larger than 
> the listing size. This happens even when immutable snapshot HDFS-11402 is 
> enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x are still suffering from this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.






[jira] [Updated] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14514:
---
Fix Version/s: 2.9.3
   2.10.0

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file is opened for write in *encryption zone*, taken a 
> snapshot and appended, the read out file size in the snapshot is larger than 
> the listing size. This happens even when immutable snapshot HDFS-11402 is 
> enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x are still suffering from this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.






[jira] [Work logged] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1581?focusedWorklogId=250681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250681
 ]

ASF GitHub Bot logged work on HDDS-1581:


Author: ASF GitHub Bot
Created on: 30/May/19 03:02
Start Date: 30/May/19 03:02
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #872: HDDS-1581. 
Atleast one of the metadata dir config property must be tagged as REQUIRED
URL: https://github.com/apache/hadoop/pull/872#issuecomment-497184181
 
 
   @xiaoyuyao Please help to review/commit. Thanks for guidance.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250681)
Time Spent: 20m  (was: 10m)

> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> 
> ozone.metadata.dirs
> 
> OZONE, OM, SCM, CONTAINER, STORAGE
> 
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
> 
>   
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on this fallback property ozone.metadata.dirs
> As advised by [~xyao], we are going with Option 1 and adding more details to 
> description of other metadata related properties that would use 
> ozone.metadata.dirs as a fallback.






[jira] [Work logged] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1581?focusedWorklogId=250679&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250679
 ]

ASF GitHub Bot logged work on HDDS-1581:


Author: ASF GitHub Bot
Created on: 30/May/19 03:01
Start Date: 30/May/19 03:01
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #872: 
HDDS-1581. Atleast one of the metadata dir config property must be tagged as 
REQUIRED
URL: https://github.com/apache/hadoop/pull/872
 
 
   Added REQUIRED tag on fallback property and updated description of other 
configs as needed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250679)
Time Spent: 10m
Remaining Estimate: 0h

> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> 
> ozone.metadata.dirs
> 
> OZONE, OM, SCM, CONTAINER, STORAGE
> 
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
> 
>   
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on this fallback property ozone.metadata.dirs
> As advised by [~xyao], we are going with Option 1 and adding more details to 
> description of other metadata related properties that would use 
> ozone.metadata.dirs as a fallback.






[jira] [Updated] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1581:
-
Labels: configuration pull-request-available  (was: configuration)

> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration, pull-request-available
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> 
> ozone.metadata.dirs
> 
> OZONE, OM, SCM, CONTAINER, STORAGE
> 
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
> 
>   
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on this fallback property ozone.metadata.dirs
> As advised by [~xyao], we are going with Option 1 and adding more details to 
> description of other metadata related properties that would use 
> ozone.metadata.dirs as a fallback.






[jira] [Updated] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-1581:

Description: 
This issue was discovered while working on HDDS-373 to generate a minimal 
ozone-site.xml with required values.

{panel:title=ozone-default.xml}

ozone.metadata.dirs

OZONE, OM, SCM, CONTAINER, STORAGE

  This setting is the fallback location for SCM, OM and DataNodes
  to store their metadata. This setting may be used in test/PoC clusters
  to simplify configuration.

  For production clusters or any time you care about performance, it is
  recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
  dfs.container.ratis.datanode.storage.dir be configured separately.

  
{panel}

However, none of the properties listed above are tagged as REQUIRED.

For starters, as the goal of HDDS-373 is to generate a simple minimal 
ozone-site.xml that can be used to start ozone, I propose that we do either of 
the following:
1. Tag ozone.metadata.dirs as REQUIRED 
OR
2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
dfs.container.ratis.datanode.storage.dir as REQUIRED

For simplicity, I would prefer option 1 as that is the fallback config. We have 
already stated that for production use, we must define the granular properties 
instead of relying on this fallback property ozone.metadata.dirs

As advised by [~xyao], we are going with Option 1 and adding more details to 
description of other metadata related properties that would use 
ozone.metadata.dirs as a fallback.

  was:
This issue was discovered while working on HDDS-373 to generate a minimal 
ozone-site.xml with required values.

{panel:title=ozone-default.xml}

ozone.metadata.dirs

OZONE, OM, SCM, CONTAINER, STORAGE

  This setting is the fallback location for SCM, OM and DataNodes
  to store their metadata. This setting may be used in test/PoC clusters
  to simplify configuration.

  For production clusters or any time you care about performance, it is
  recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
  dfs.container.ratis.datanode.storage.dir be configured separately.

  
{panel}

However, none of the properties listed above are tagged as REQUIRED.

For starters, as the goal of HDDS-373 is to generate a simple minimal 
ozone-site.xml that can be used to start ozone, I propose that we do either of 
the following:
1. Tag ozone.metadata.dirs as REQUIRED 
OR
2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
dfs.container.ratis.datanode.storage.dir as REQUIRED

For simplicity, I would prefer option 1 as that is the fallback config. We have 
already stated that for production use, we must define the granular properties 
instead of relying on this fallback property ozone.metadata.dirs

cc: [~anu], [~arp], [~ajayydv] could you share your 2 cents pls? :)


> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> 
> ozone.metadata.dirs
> 
> OZONE, OM, SCM, CONTAINER, STORAGE
> 
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
> 
>   
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on this fallback property ozone.metadata.dirs
> As advised by [~xyao], we are going with Option 1 and adding more details to 
> description of other metadata related properties that would use 
> ozone.metadata.dirs as a fallback.




[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-29 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851476#comment-16851476
 ] 

Takanobu Asanuma commented on HDFS-14508:
-

Uploaded the 1st patch.
 * On second thought, creating a common interface may not be appropriate if the 
Router keeps the same interface as the NameNode, because that would make the 
NameNode interface the common one. So, as in the existing code, I want to keep 
NamenodeBeanMetrics to offer the NameNode metrics.

 * Instead of creating another metrics service, the patch creates a new 
bean ({{RouterMBean}}) for router-specific metrics, and {{FederationMetrics}} 
implements it (sketched below). I think this way is easier and more natural.

 * Moved the router-specific metrics from {{FederationMBean}} to {{RouterMBean}}.

Kindly help to review it.
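
A hedged sketch of the bean approach described above; the metric getters are illustrative assumptions, not the contents of the patch.

{code:java}
/** Router-specific JMX bean (simplified sketch). */
public interface RouterMBean {
  String getRouterStatus();   // hypothetical router-specific metric
  String getVersion();        // hypothetical
}

/** Existing metrics class implementing the new bean (heavily simplified). */
class FederationMetrics implements RouterMBean {
  @Override
  public String getRouterStatus() {
    return "RUNNING";
  }

  @Override
  public String getVersion() {
    return "3.3.0-SNAPSHOT";
  }
}
{code}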

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned-up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}






[jira] [Updated] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-29 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14508:

Status: Patch Available  (was: Open)

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned-up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}






[jira] [Updated] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-29 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14508:

Attachment: HDFS-14508-HDFS-13891.1.patch

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned-up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}






[jira] [Created] (HDDS-1612) Add shell command to print topology

2019-05-29 Thread Sammi Chen (JIRA)
Sammi Chen created HDDS-1612:


 Summary: Add shell command to print topology 
 Key: HDDS-1612
 URL: https://issues.apache.org/jira/browse/HDDS-1612
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen









[jira] [Comment Edited] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-05-29 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850757#comment-16850757
 ] 

Sammi Chen edited comment on HDFS-14356 at 5/30/19 2:41 AM:


Thanks [~PhiloHe] for the update. A few comments:
1. Typo "amd" in "Please refer to http://pmem.io/ amd 
https://github.com/pmem/pmdk".
2. "It is recommended to build the project with this option if user plans to 
use SCM backed HDFS cache." Please name the reason why it is recommended, 
e.g. "for better performance"?
3. For logging the mappable block loader, try "this.getClass()" in the super 
class instead of using instanceof with the sub class name.
4.  if (!(cacheLoader instanceof NativePmemMappableBlockLoader)) {
      return -1;
    }
Suggest adding a new check function such as "isNative" in MappableBlockLoader 
instead of using instanceof, to keep FsDatasetCache neutral to the kind of 
cache loader implementation. Same suggestion for NativePmemMappedBlock: you 
can add a new function such as "getMappedAddress". (A rough sketch follows 
after this comment.)
5. Suggest changing the log level to debug or trace for
   LOG.info("Get InputStream by cache address.");
6. Several occurrences of the typo "pmemMappedAddres" in NativePmemMappedBlock.
7. Use 4-space indentation for the second line, and double check other 
functions too.
NativePmemMappedBlock(long pmemMappedAddres, long length,
    ExtendedBlockId key) {
8. Unnecessary blank line in the native code, such as 
+JNIEnv *env, jclass thisClass, jstring filePath, jlong fileLength) {
+  #if (defined UNIX) && (defined HADOOP_PMDK_LIBRARY)
+
+/* create a pmem file and memory map it */

was (Author: sammi):
Thanks [~PhiloHe] for the update. A few comments,
1. typo "amd" in  "Please refer to http://pmem.io/ amd 
https://github.com/pmem/pmdk;
2.  "It is recommended to build the project with this option if user plans to 
use SCM backed
HDFS cache."  name the reason why it's recommended, "for better 
performance"? 
3. for log the mapper block loader, try "this.class.getClass()" in super class, 
instead of use instanceof sub class name.
4.  if (!(cacheLoader instanceof NativePmemMappableBlockLoader)) {
  return -1;
}
Suggest add a new check function like "isNative" in MappableBlockLoader, 
instead of use instanceof to keep FsDatasetCache kind of cache loader 
implementation nutral. Same suggestion for the NativePmemMappedBlock. You can 
add a new function, such as "getMappedAddress"
5.  Suggest change the log level to debug or trace.
   LOG.info("Get InputStream by cache address."); 
6. several typo "pmemMappedAddres" in NativePmemMappedBlock
7.  4 space indention for the second line. And double check other functions 
too. 
NativePmemMappedBlock(long pmemMappedAddres, long length,
ExtendedBlockId key) {
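
The following is a minimal, hypothetical sketch of the refactoring suggested in items 3 and 4 above. The method names isNative() and getMappedAddress() come from the comment; everything else (class bodies, fields, call site) is assumed for illustration and is not the actual HDFS-14356 patch.

{code:java}
// Hypothetical sketch only -- not the actual HDFS-14356 code.
abstract class MappableBlockLoader {
  // By default a loader is not backed by native PMDK mappings.
  boolean isNative() {
    return false;
  }
}

class NativePmemMappableBlockLoader extends MappableBlockLoader {
  @Override
  boolean isNative() {
    return true; // only the native PMDK loader reports true
  }
}

class FsDatasetCacheCallSiteSketch {
  // The instanceof check quoted in item 4 could then become:
  long mappedAddressOrMinusOne(MappableBlockLoader cacheLoader,
      long pmemMappedAddress) {
    if (!cacheLoader.isNative()) {
      return -1;
    }
    // getMappedAddress() would be the analogous accessor on
    // NativePmemMappedBlock suggested in the comment above.
    return pmemMappedAddress;
  }
}
{code}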

> Implement HDFS cache on SCM with native PMDK libs
> -
>
> Key: HDFS-14356
> URL: https://issues.apache.org/jira/browse/HDFS-14356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14356.000.patch, HDFS-14356.001.patch, 
> HDFS-14356.002.patch, HDFS-14356.003.patch, HDFS-14356.004.patch, 
> HDFS-14356.005.patch, HDFS-14356.006.patch
>
>
> In this implementation, native PMDK libs are used to map HDFS blocks to SCM. 
> To use this implementation, users should build Hadoop with the PMDK libs by 
> specifying a build option. This implementation is only supported on the Linux 
> platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851473#comment-16851473
 ] 

Dinesh Chitlangia commented on HDDS-1581:
-

[~xyao] Thanks. I will upload a patch shortly.

> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> <property>
>   <name>ozone.metadata.dirs</name>
>   <tag>OZONE, OM, SCM, CONTAINER, STORAGE</tag>
>   <description>
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
>   </description>
> </property>
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on the fallback property ozone.metadata.dirs.
> cc: [~anu], [~arp], [~ajayydv] could you share your 2 cents pls? :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250673=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250673
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 02:03
Start Date: 30/May/19 02:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497172980
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 279 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 292 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 477 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | compile | 259 | the patch passed |
   | +1 | cc | 259 | the patch passed |
   | +1 | javac | 259 | the patch passed |
   | -0 | checkstyle | 47 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 243 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1283 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6371 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 9e24913aea1c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/testReport/ |
   | Max. process+thread count | 4580 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id:

[jira] [Assigned] (HDFS-14518) Optimize HDFS cache checksum and make checksum configurable

2019-05-29 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He reassigned HDFS-14518:
-

Assignee: Feilong He

> Optimize HDFS cache checksum and make checksum configurable
> ---
>
> Key: HDFS-14518
> URL: https://issues.apache.org/jira/browse/HDFS-14518
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
>
> An HDFS cache checksum can be computed over cached data for verification. We 
> can also consider making the checksum configurable, so users can turn off the 
> checksum operation when caching data.
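
As a rough illustration of the configurable part, a hedged sketch follows; the property name and class below are invented for this sketch and are not actual HDFS configuration keys.

{code:java}
import org.apache.hadoop.conf.Configuration;

class CacheChecksumConfigSketch {
  // Hypothetical key name, invented for this sketch only.
  static final String CACHE_VERIFY_CHECKSUM_KEY =
      "dfs.datanode.cache.verify.checksum";

  // Default to verifying, so users must opt out of the checksum explicitly.
  static boolean shouldVerifyChecksum(Configuration conf) {
    return conf.getBoolean(CACHE_VERIFY_CHECKSUM_KEY, true);
  }
}
{code}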



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14518) Optimize HDFS cache checksum and make checksum configurable

2019-05-29 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851456#comment-16851456
 ] 

Feilong He commented on HDFS-14518:
---

Hi [~jojochuang], this Jira applies to both the DRAM cache and the Pmem cache. So 
strictly speaking, it is not related to HDFS-13762.

> Optimize HDFS cache checksum and make checksum configurable
> ---
>
> Key: HDFS-14518
> URL: https://issues.apache.org/jira/browse/HDFS-14518
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Feilong He
>Priority: Minor
>
> An HDFS cache checksum can be computed over cached data for verification. We 
> can also consider making the checksum configurable, so users can turn off the 
> checksum operation when caching data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250672=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250672
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 01:58
Start Date: 30/May/19 01:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497172105
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 535 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 838 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 297 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 266 | the patch passed |
   | +1 | cc | 266 | the patch passed |
   | +1 | javac | 266 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 635 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 75 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 487 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 232 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1506 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6491 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 07534816c8d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/testReport/ |
   | Max. process+thread count | 3661 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250672)
Time Spent: 8h 50m  (was: 8h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> 

[jira] [Commented] (HDDS-1581) Atleast one of the metadata dir config property must be tagged as REQUIRED

2019-05-29 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851452#comment-16851452
 ] 

Xiaoyu Yao commented on HDDS-1581:
--

+1 for option 1. Option 2 is an optimization oriented for production.

We can add more details to the default-value notes for the individual keys 
mentioned in Option 2. 

 

> Atleast one of the metadata dir config property must be tagged as REQUIRED
> --
>
> Key: HDDS-1581
> URL: https://issues.apache.org/jira/browse/HDDS-1581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: configuration
>
> This issue was discovered while working on HDDS-373 to generate a minimal 
> ozone-site.xml with required values.
> {panel:title=ozone-default.xml}
> <property>
>   <name>ozone.metadata.dirs</name>
>   <tag>OZONE, OM, SCM, CONTAINER, STORAGE</tag>
>   <description>
>   This setting is the fallback location for SCM, OM and DataNodes
>   to store their metadata. This setting may be used in test/PoC clusters
>   to simplify configuration.
>   For production clusters or any time you care about performance, it is
>   recommended that ozone.om.db.dirs, ozone.scm.db.dirs and
>   dfs.container.ratis.datanode.storage.dir be configured separately.
>   </description>
> </property>
> {panel}
> However, none of the properties listed above are tagged as REQUIRED.
> For starters, as the goal of HDDS-373 is to generate a simple minimal 
> ozone-site.xml that can be used to start ozone, I propose that we do either 
> of the following:
> 1. Tag ozone.metadata.dirs as REQUIRED 
> OR
> 2. Tag ozone.om.db.dirs, ozone.scm.db.dirs and 
> dfs.container.ratis.datanode.storage.dir as REQUIRED
> For simplicity, I would prefer option 1 as that is the fallback config. We 
> have already stated that for production use, we must define the granular 
> properties instead of relying on the fallback property ozone.metadata.dirs.
> cc: [~anu], [~arp], [~ajayydv] could you share your 2 cents pls? :)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=250670=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250670
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 30/May/19 01:45
Start Date: 30/May/19 01:45
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497169505
 
 
   @ajayydv, half of the test failures are related. Can you fix them? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250670)
Time Spent: 6.5h  (was: 6h 20m)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=250663=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250663
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 30/May/19 01:19
Start Date: 30/May/19 01:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497164591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 544 | trunk passed |
   | +1 | compile | 250 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 825 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 299 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 480 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 261 | the patch passed |
   | +1 | javac | 261 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 634 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 227 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1245 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 64 | The patch does not generate ASF License warnings. |
   | | | 6173 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 92e7bb7a36d1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/testReport/ |
   | Max. process+thread count | 5140 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250663)
Time Spent: 0.5h  (was: 20m)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.
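
A hedged sketch of what such a metrics class could look like with the Hadoop metrics2 annotations; the class and counter names below are assumptions for illustration, not the actual HDDS-1579 change.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative sketch only; names are assumptions, not the HDDS-1579 patch.
@Metrics(about = "OM DoubleBuffer metrics", context = "ozone")
class OMDoubleBufferMetricsSketch {

  @Metric(about = "Number of times the double buffer has been flushed")
  private MutableCounterLong flushIterations;

  @Metric(about = "Total number of transactions flushed so far")
  private MutableCounterLong totalTransactionsFlushed;

  void incrFlushIterations() {
    flushIterations.incr();
  }

  void incrTotalTransactionsFlushed(long count) {
    totalTransactionsFlushed.incr(count);
  }
}
{code}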



--
This message was sent by Atlassian JIRA

[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=250662=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250662
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 30/May/19 01:05
Start Date: 30/May/19 01:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497162182
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 285 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 282 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 473 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 137 | the patch passed |
   | +1 | findbugs | 473 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 226 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1174 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6145 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d480a8fe46b5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9ad7cad |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/testReport/ |
   | Max. process+thread count | 5043 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250662)
Time Spent: 20m  (was: 10m)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer 

[jira] [Commented] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851433#comment-16851433
 ] 

Hadoop QA commented on HDFS-14514:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HDFS-14514 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14514 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970255/HDFS-14514.branch-2.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26865/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851432#comment-16851432
 ] 

Wei-Chiu Chuang commented on HDFS-14514:


Pushed v003 to branch-3. There's a conflict in branch-2.8 so I'll wait for an 
update.

 

I used the following commit message so it will be recognized as the 
contribution by both [~smeng] and [~sodonnell] on github.
{noformat}
Co-authored-by: Stephen O'Donnell {noformat}
 

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851432#comment-16851432
 ] 

Wei-Chiu Chuang edited comment on HDFS-14514 at 5/30/19 12:38 AM:
--

Pushed v004 to branch-2. There's a conflict in branch-2.8 so I'll wait for an 
update.

 

I used the following commit message so it will be recognized as the 
contribution by both [~smeng] and [~sodonnell] on github.
{noformat}
Co-authored-by: Stephen O'Donnell {noformat}
 


was (Author: jojochuang):
Pushed v003 to branch-3, and also posted a v004 for reference. There's a 
conflict in branch-2.8 so I'll wait for an update.

 

I used the following commit message so it will be recognized as the 
contribution by both [~smeng] and [~sodonnell] on github.
{noformat}
Co-authored-by: Stephen O'Donnell {noformat}
 

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851432#comment-16851432
 ] 

Wei-Chiu Chuang edited comment on HDFS-14514 at 5/30/19 12:38 AM:
--

Pushed v003 to branch-3, and also posted a v004 for reference. There's a 
conflict in branch-2.8 so I'll wait for an update.

 

I used the following commit message so it will be recognized as the 
contribution by both [~smeng] and [~sodonnell] on github.
{noformat}
Co-authored-by: Stephen O'Donnell {noformat}
 


was (Author: jojochuang):
Pushed v003 to branch-3. There's a conflict in branch-2.8 so I'll wait for an 
update.

 

I used the following commit message so it will be recognized as the 
contribution by both [~smeng] and [~sodonnell] on github.
{noformat}
Co-authored-by: Stephen O'Donnell {noformat}
 

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14514:
---
Attachment: HDFS-14514.branch-2.004.patch

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch, 
> HDFS-14514.branch-2.004.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250652=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250652
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:19
Start Date: 30/May/19 00:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814519
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -390,7 +405,11 @@ private OMBucketCreateResponse createBucket(String 
volumeName,
 OmBucketInfo omBucketInfo =
 OmBucketInfo.newBuilder().setVolumeName(volumeName)
 .setBucketName(bucketName).setCreationTime(Time.now()).build();
-return new OMBucketCreateResponse(omBucketInfo);
+return new OMBucketCreateResponse(omBucketInfo, OMResponse.newBuilder()
 
 Review comment:
   This is added based on Arpit's comment in HDDS-1512, as we want to test the OM 
Double Buffer implementation without actual OM responses too. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250652)
Time Spent: 8h 40m  (was: 8.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Previously, OM used the Ratis client to communicate with the Ratis server; 
> instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement bucket operations. HA and non-HA 
> will have different code paths, but once all requests are implemented there 
> will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851423#comment-16851423
 ] 

Wei-Chiu Chuang commented on HDFS-14514:


+1 looks great. I'll update the patch to convert into else-if upon commit and 
upload an updated patch for future reference.

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851419#comment-16851419
 ] 

Daniel Templeton edited comment on HDFS-14514 at 5/30/19 12:17 AM:
---

LGTM.  I'd like to see the last two ifs in DFSInputStream be an if/else-if, but 
I can fix that on commit.  If there are no complaints, I'll commit this later 
this evening.


was (Author: templedf):
LGTM.  I'd like to see the last two {{if}}s in {{DFSInputStream}} be an 
{{if/else-if}}, but I can fix that on commit.  If there are no complaints, I'll 
commit this later this evening.

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from the 
> snapshot is larger than the listing size. This happens even when immutable 
> snapshots (HDFS-11402) are enabled.
> Note: The refactor HDFS-8905 happened in Hadoop 3.0 and later fixed the bug 
> silently (probably incidentally). Hadoop 2.x is still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml, 
> start HDFS cluster
> 2. Create an empty directory /dataenc, create encryption zone and allow 
> snapshot on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using Flume HDFS sink to tail a local file.
> 4. Append the file several times using the client, keep the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append the file one or more times, but don't let the file size exceed the 
> block size limit. Wait for several seconds for the append to be flushed to DN.
> 7. Do a -ls on the file inside the snapshot, then try to read the file using 
> -get, you should see the actual file size read is larger than the listing 
> size from -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250649=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250649
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:16
Start Date: 30/May/19 00:16
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497153823
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250649)
Time Spent: 8.5h  (was: 8h 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14514) Actual read size of open file in encryption zone still larger than listing size even after enabling HDFS-11402 in Hadoop 2

2019-05-29 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851419#comment-16851419
 ] 

Daniel Templeton commented on HDFS-14514:
-

LGTM.  I'd like to see the last two {{if}}s in {{DFSInputStream}} be an 
{{if/else-if}}, but I can fix that on commit.  If there are no complaints, I'll 
commit this later this evening.
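
For illustration only, a generic sketch of the suggested restructuring; the 
conditions and adjustments below are hypothetical stand-ins, not the actual 
DFSInputStream logic:

{code:java}
public class IfElseIfSketch {

  // Two previously independent ifs collapsed into an if/else-if so that at
  // most one of the adjustments can run for a given call.
  static long adjustLength(long length, boolean underConstruction,
      boolean inSnapshot) {
    if (underConstruction) {
      length = Math.max(length, 0L);
    } else if (inSnapshot) {                  // was a second, independent if
      length = Math.min(length, 128L * 1024 * 1024);
    }
    return length;
  }

  public static void main(String[] args) {
    System.out.println(adjustLength(256L * 1024 * 1024, false, true));
  }
}
{code}

The point of the else-if is simply that the second adjustment is skipped 
whenever the first condition already applied.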

> Actual read size of open file in encryption zone still larger than listing 
> size even after enabling HDFS-11402 in Hadoop 2
> --
>
> Key: HDFS-14514
> URL: https://issues.apache.org/jira/browse/HDFS-14514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs, snapshots
>Affects Versions: 2.6.5, 2.9.2, 2.8.5, 2.7.7
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14514.branch-2.001.patch, 
> HDFS-14514.branch-2.002.patch, HDFS-14514.branch-2.003.patch
>
>
> In Hadoop 2, when a file in an *encryption zone* is opened for write, a 
> snapshot is taken, and the file is then appended, the file size read from 
> the snapshot is larger than the listing size. This happens even when the 
> immutable snapshot feature (HDFS-11402) is enabled.
> Note: the HDFS-8905 refactor in Hadoop 3.0 and later fixed this bug silently 
> (probably incidentally). Hadoop 2.x releases are still affected by this 
> issue.
> Thanks [~sodonnell] for locating the root cause in the codebase.
> Repro:
> 1. Set dfs.namenode.snapshot.capture.openfiles to true in hdfs-site.xml and 
> start the HDFS cluster
> 2. Create an empty directory /dataenc, make it an encryption zone, and allow 
> snapshots on it
> {code:bash}
> hadoop key create reprokey
> sudo -u hdfs hdfs dfs -mkdir /dataenc
> sudo -u hdfs hdfs crypto -createZone -keyName reprokey -path /dataenc
> sudo -u hdfs hdfs dfsadmin -allowSnapshot /dataenc
> {code}
> 3. Use a client that keeps a file open for write under /dataenc. For example, 
> I'm using the Flume HDFS sink to tail a local file.
> 4. Append to the file several times using the client, keeping the file open.
> 5. Create a snapshot
> {code:bash}
> sudo -u hdfs hdfs dfs -createSnapshot /dataenc snap1
> {code}
> 6. Append to the file one or more times, but don't let the file size exceed 
> the block size limit. Wait several seconds for the appends to be flushed to 
> the DataNode.
> 7. Do a -ls on the file inside the snapshot, then read the file using -get; 
> the actual size read is larger than the size listed by -ls.
> The patch and an updated unit test will be uploaded later.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14516) RBF: Create hdfs-rbf-site.xml for RBF specific properties

2019-05-29 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851416#comment-16851416
 ] 

Takanobu Asanuma commented on HDFS-14516:
-

Thanks for reviewing and committing it, [~ayushtkn] and [~elgoiri].

For the test, let's wait for a while and see how often it happens.

> RBF: Create hdfs-rbf-site.xml for RBF specific properties
> -
>
> Key: HDFS-14516
> URL: https://issues.apache.org/jira/browse/HDFS-14516
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14516.1.patch, HDFS-14516.2.patch
>
>
> Currently, users write RBF properties in {{hdfs-site.xml}} even though the 
> definitions live in {{hdfs-rbf-default.xml}}. As in other modules, it would 
> be better to have a module-specific configuration file, {{hdfs-rbf-site.xml}}.
> {{hdfs-rbf-default.xml}} should also be loaded when it exists in the 
> configuration directory; at the moment it is only documentation.
> There is an earlier discussion in HDFS-13215.
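
As a sketch of how such a module-specific file is usually wired in (the same 
pattern {{HdfsConfiguration}} uses for hdfs-default.xml and hdfs-site.xml). The 
class name and the property queried below are illustrative assumptions, not 
part of the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class RbfConfigurationSketch {
  static {
    // Default resources are picked up by every Configuration instance created
    // afterwards, provided the files are found on the classpath.
    Configuration.addDefaultResource("hdfs-rbf-default.xml");
    Configuration.addDefaultResource("hdfs-rbf-site.xml");
  }

  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // A representative RBF property; prints null if it is not set anywhere.
    System.out.println(conf.get("dfs.federation.router.default.nameserviceId"));
  }
}
{code}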



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250645
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:12
Start Date: 30/May/19 00:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   Currently, with the Ratis client there is a TODO for the RaftRetry failure 
exception, and I don't see anything done to handle NotReplicatedException.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250645)
Time Spent: 8h 10m  (was: 8h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250646
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:12
Start Date: 30/May/19 00:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   Currently, with the Ratis client there is a TODO for the RaftRetry failure 
exception, and I don't see anything being done to handle NotReplicatedException.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250646)
Time Spent: 8h 20m  (was: 8h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250642=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250642
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:10
Start Date: 30/May/19 00:10
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497152646
 
 
   Thank you @hanishakoneru for the review.
   I have addressed the review comments, and for some of the questions I have 
replied with my answers.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250642)
Time Spent: 8h  (was: 7h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250641
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:09
Start Date: 30/May/19 00:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814908
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
 ##
 @@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.util.Time;
+
+import java.util.UUID;
+
+/**
+ * Helper class to test OMClientRequest classes.
+ */
+public final class TestOMRequestUtils {
+
+  private TestOMRequestUtils() {
+//Do nothing
+  }
+  public static void addEntryToDB(String volumeName, String bucketName,
+  OMMetadataManager omMetadataManager)
+  throws Exception {
+
+createVolumeEntryToDDB(volumeName, bucketName, omMetadataManager);
+
+OmBucketInfo omBucketInfo =
+OmBucketInfo.newBuilder().setVolumeName(volumeName)
+.setBucketName(bucketName).setCreationTime(Time.now()).build();
+
+omMetadataManager.getBucketTable().put(
+omMetadataManager.getBucketKey(volumeName, bucketName), omBucketInfo);
+  }
+
+  public static void createVolumeEntryToDDB(String volumeName,
+  String bucketName, OMMetadataManager omMetadataManager)
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250641)
Time Spent: 7h 50m  (was: 7h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250640
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:08
Start Date: 30/May/19 00:08
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814783
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.
+BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetBucketPropertyRequest;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.util.UUID;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketSetPropertyRequest class which handles OMSetBucketProperty
+ * request.
+ */
+public class TestOMBucketSetPropertyRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+Assert.assertEquals(omRequest,
+omBucketSetPropertyRequest.preExecute(ozoneManager));
+  }
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+// Create with default BucketInfo values
+TestOMRequestUtils.addEntryToDB(volumeName, bucketName, omMetadataManager);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+OMClientResponse omClientResponse =
+omBucketSetPropertyRequest.validateAndUpdateCache(ozoneManager, 1);
+
+Assert.assertEquals(true,
+omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName))
+.getIsVersionEnabled());
+
+

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250639
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:07
Start Date: 30/May/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814660
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250637=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250637
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:07
Start Date: 30/May/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814519
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -390,7 +405,11 @@ private OMBucketCreateResponse createBucket(String 
volumeName,
 OmBucketInfo omBucketInfo =
 OmBucketInfo.newBuilder().setVolumeName(volumeName)
 .setBucketName(bucketName).setCreationTime(Time.now()).build();
-return new OMBucketCreateResponse(omBucketInfo);
+return new OMBucketCreateResponse(omBucketInfo, OMResponse.newBuilder()
 
 Review comment:
   This is added based on Arpit's comment in HDDS-1512, as we want to test the 
OM double buffer implementation without actual OM responses.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250637)
Time Spent: 7h 10m  (was: 7h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250638=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250638
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:07
Start Date: 30/May/19 00:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814608
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250635=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250635
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:05
Start Date: 30/May/19 00:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814215
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.KeyValueUtil;
+import org.apache.hadoop.ozone.om.helpers.OmBucketArgs;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketSetPropertyResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle SetBucketProperty Request.
+ */
+public class OMBucketSetPropertyRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketSetPropertyRequest.class);
+
+  public OMBucketSetPropertyRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+return getOmRequest();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+OMMetrics omMetrics = ozoneManager.getOmMetrics();
+
+// This will never be null, on a real Ozone cluster. For tests this might
+// be null. using mockito, to set omMetrics object, but still getting
+// null. For now added this not null check.
+if (omMetrics != null) {
+  omMetrics.incNumBucketUpdates();
+}
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250635)
Time Spent: 6h 50m  (was: 6h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250636=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250636
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:05
Start Date: 30/May/19 00:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814271
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/package-info.java
 ##
 @@ -0,0 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * This package contains classes for handling OMRequest's.
+ */
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250636)
Time Spent: 7h  (was: 6h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira adds the changes to implement the bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250634=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250634
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:04
Start Date: 30/May/19 00:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288814047
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
+newBucketInfo.setCreationTime(Time.now()));
+
+if (bucketInfo.hasBeinfo()) {
+  newBucketInfo.setBeinfo(getBeinfo(kmsProvider, bucketInfo));
+}
+
+newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+return getOmRequest().toBuilder().setCreateBucketRequest(
+newCreateBucketRequest.build()).build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumBucketCreates();
+
+

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250633=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250633
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:03
Start Date: 30/May/19 00:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813760
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250633)
Time Spent: 6.5h  (was: 6h 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250631=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250631
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:02
Start Date: 30/May/19 00:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813640
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.utils;
+
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.OMBucketCreateRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketDeleteRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketSetPropertyRequest;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+
+import java.io.IOException;
+
+/**
+ * Utility class used by OzoneManager HA.
+ */
+public final class OzoneManagerRatisUtils {
+
+  private OzoneManagerRatisUtils() {
+  }
+  /**
+   * Create OMClientRequest which encapsulates the OMRequest.
+   * @param omRequest
+   * @return OMClientRequest
+   * @throws IOException
+   */
+  public static OMClientRequest createClientRequest(OMRequest omRequest)
+  throws IOException {
+Type cmdType = omRequest.getCmdType();
+switch (cmdType) {
+case CreateBucket:
+  return new OMBucketCreateRequest(omRequest);
+case DeleteBucket:
+  return new OMBucketDeleteRequest(omRequest);
+case SetBucketProperty:
+  return new OMBucketSetPropertyRequest(omRequest);
+default:
+  // TODO: will update once all request types are implemented.
+  return null;
+}
+  }
+
+  /**
+   * Convert exception result to {@link OzoneManagerProtocolProtos.Status}.
+   * @param exception
+   * @return {@link OzoneManagerProtocolProtos.Status}
+   */
+  public static Status exceptionToResponseStatus(IOException exception) {
+if (exception instanceof OMException) {
+  return Status.values()[((OMException) exception).getResult().ordinal()];
 
 Review comment:
   Here ordinal() gives the position of the result code, and that position is used 
to look up the corresponding value in Status.values().
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250631)
Time Spent: 6h 10m  (was: 6h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations, and 
> HA/Non-HA will have a different code path, but once all requests are 
> 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250632=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250632
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:02
Start Date: 30/May/19 00:02
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813640
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.utils;
+
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.OMBucketCreateRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketDeleteRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketSetPropertyRequest;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+
+import java.io.IOException;
+
+/**
+ * Utility class used by OzoneManager HA.
+ */
+public final class OzoneManagerRatisUtils {
+
+  private OzoneManagerRatisUtils() {
+  }
+  /**
+   * Create OMClientRequest which encapsulates the OMRequest.
+   * @param omRequest
+   * @return OMClientRequest
+   * @throws IOException
+   */
+  public static OMClientRequest createClientRequest(OMRequest omRequest)
+  throws IOException {
+Type cmdType = omRequest.getCmdType();
+switch (cmdType) {
+case CreateBucket:
+  return new OMBucketCreateRequest(omRequest);
+case DeleteBucket:
+  return new OMBucketDeleteRequest(omRequest);
+case SetBucketProperty:
+  return new OMBucketSetPropertyRequest(omRequest);
+default:
+  // TODO: will update once all request types are implemented.
+  return null;
+}
+  }
+
+  /**
+   * Convert exception result to {@link OzoneManagerProtocolProtos.Status}.
+   * @param exception
+   * @return {@link OzoneManagerProtocolProtos.Status}
+   */
+  public static Status exceptionToResponseStatus(IOException exception) {
+if (exception instanceof OMException) {
+  return Status.values()[((OMException) exception).getResult().ordinal()];
 
 Review comment:
   Here ordinal() gives the position of the result code, and that position is used 
to look up the corresponding value in Status.values(), which returns the array of 
Status constants.
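   To make the two lookup styles concrete, here is a minimal, self-contained sketch. The enum constants below are placeholders, not the real OMException.ResultCodes / OzoneManagerProtocolProtos.Status values:

```java
// Stand-in enums; the real ones are much larger but must stay aligned in
// declaration order for the ordinal-based mapping to be correct.
enum ResultCodes { OK, VOLUME_NOT_FOUND, BUCKET_ALREADY_EXISTS }
enum Status { OK, VOLUME_NOT_FOUND, BUCKET_ALREADY_EXISTS }

public class StatusMappingSketch {
  public static void main(String[] args) {
    ResultCodes result = ResultCodes.BUCKET_ALREADY_EXISTS;

    // Position-based mapping: ordinal() is the declaration index, used to index
    // into Status.values(). Works only while both enums declare their constants
    // in exactly the same order.
    Status byOrdinal = Status.values()[result.ordinal()];

    // Name-based mapping: valueOf() matches on the constant name instead, so it
    // is order-insensitive but requires identical names in both enums.
    Status byName = Status.valueOf(result.name());

    System.out.println(byOrdinal + " " + byName); // BUCKET_ALREADY_EXISTS twice
  }
}
```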
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250632)
Time Spent: 6h 20m  (was: 6h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations, and 
> HA/Non-HA will have a different code path, 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250629=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250629
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:01
Start Date: 30/May/19 00:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813431
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -296,11 +281,7 @@ public OmBucketInfo setBucketProperty(OmBucketArgs args) 
throws IOException {
   bucketInfoBuilder.setCreationTime(oldBucketInfo.getCreationTime());
 
   OmBucketInfo omBucketInfo = bucketInfoBuilder.build();
-
-  if (!isRatisEnabled) {
-commitSetBucketPropertyInfoToDB(omBucketInfo);
-  }
-  return omBucketInfo;
+  commitSetBucketPropertyInfoToDB(omBucketInfo);
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250629)
Time Spent: 5h 50m  (was: 5h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250630=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250630
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 00:01
Start Date: 30/May/19 00:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   In the current Ratis-client path there is a TODO for the RaftRetry failure 
exception, and I don't see anything being done to handle NotReplicatedException.
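   A rough sketch of what such handling might eventually look like in processReply. The getter names (getRetryFailureException, getNotReplicatedException) are assumptions about the Ratis RaftClientReply API and would need to be verified against the Ratis version in use:

```java
// Hypothetical additions to processReply(); the reply getters are assumptions.
RaftRetryFailureException retryFailure = reply.getRetryFailureException();
if (retryFailure != null) {
  throw new ServiceException(retryFailure);
}
NotReplicatedException notReplicated = reply.getNotReplicatedException();
if (notReplicated != null) {
  throw new ServiceException(notReplicated);
}
```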
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250630)
Time Spent: 6h  (was: 5h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations, and 
> HA/Non-HA will have a different code 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250620=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250620
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288809976
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.
+BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.SetBucketPropertyRequest;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.util.UUID;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketSetPropertyRequest class which handles OMSetBucketProperty
+ * request.
+ */
+public class TestOMBucketSetPropertyRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+Assert.assertEquals(omRequest,
+omBucketSetPropertyRequest.preExecute(ozoneManager));
+  }
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+
+OMRequest omRequest = createSeBucketPropertyRequest(volumeName,
+bucketName, true);
+
+// Create with default BucketInfo values
+TestOMRequestUtils.addEntryToDB(volumeName, bucketName, omMetadataManager);
+
+OMBucketSetPropertyRequest omBucketSetPropertyRequest =
+new OMBucketSetPropertyRequest(omRequest);
+
+OMClientResponse omClientResponse =
+omBucketSetPropertyRequest.validateAndUpdateCache(ozoneManager, 1);
+
+Assert.assertEquals(true,
+omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName))
+.getIsVersionEnabled());
+
+

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250622=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250622
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288809062
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250612=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250612
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288804803
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/package-info.java
 ##
 @@ -0,0 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * This package contains classes for handling OMRequest's.
+ */
 
 Review comment:
   Typo: OMRequests
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250612)
Time Spent: 4.5h  (was: 4h 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250614=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250614
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288721948
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -296,11 +281,7 @@ public OmBucketInfo setBucketProperty(OmBucketArgs args) 
throws IOException {
   bucketInfoBuilder.setCreationTime(oldBucketInfo.getCreationTime());
 
   OmBucketInfo omBucketInfo = bucketInfoBuilder.build();
-
-  if (!isRatisEnabled) {
-commitSetBucketPropertyInfoToDB(omBucketInfo);
-  }
-  return omBucketInfo;
+  commitSetBucketPropertyInfoToDB(omBucketInfo);
 
 Review comment:
   commitSetBucketPropertyInfoToDB() just delegates to commitCreateBucketInfoToDB() 
without any modification. We can call commitCreateBucketInfoToDB() directly here 
(and maybe rename it to commitBucketInfoToDB?).
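   For illustration, a minimal sketch of the consolidated helper being suggested. The name commitBucketInfoToDB is the proposal above; the field and accessor names (metadataManager, getBucketKey, getBucketTable) follow the patterns visible in this patch, but the exact body is an assumption, not the actual BucketManagerImpl code:

```java
// Hypothetical consolidated helper used by both the create and set-property paths.
private void commitBucketInfoToDB(OmBucketInfo omBucketInfo) throws IOException {
  String bucketKey = metadataManager.getBucketKey(
      omBucketInfo.getVolumeName(), omBucketInfo.getBucketName());
  metadataManager.getBucketTable().put(bucketKey, omBucketInfo);
}
```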
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250614)
Time Spent: 4h 50m  (was: 4h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250613=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250613
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288791912
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
 
 Review comment:
   newCreateBucketRequest.setBucketInfo() is invoked later as well; only the 
creation time should be set here.
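   In other words, something along these lines (a sketch against the builder API shown in this diff; only the creation time is stamped at this point, and the single setBucketInfo call happens later once the builder is complete):

```java
// Sketch: stamp the creation time here and defer the one setBucketInfo(...) call
// until after any encryption info has been filled in.
BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder()
    .setCreationTime(Time.now());

// ... encryption info handling elided ...

newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
```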
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250613)
Time Spent: 4h 40m  (was: 4.5h)

> Implement Bucket Write Requests to use 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250617=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250617
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288810202
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
 ##
 @@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.util.Time;
+
+import java.util.UUID;
+
+/**
+ * Helper class to test OMClientRequest classes.
+ */
+public final class TestOMRequestUtils {
+
+  private TestOMRequestUtils() {
+//Do nothing
+  }
+  public static void addEntryToDB(String volumeName, String bucketName,
+  OMMetadataManager omMetadataManager)
+  throws Exception {
+
+createVolumeEntryToDDB(volumeName, bucketName, omMetadataManager);
+
+OmBucketInfo omBucketInfo =
+OmBucketInfo.newBuilder().setVolumeName(volumeName)
+.setBucketName(bucketName).setCreationTime(Time.now()).build();
+
+omMetadataManager.getBucketTable().put(
+omMetadataManager.getBucketKey(volumeName, bucketName), omBucketInfo);
+  }
+
+  public static void createVolumeEntryToDDB(String volumeName,
+  String bucketName, OMMetadataManager omMetadataManager)
 
 Review comment:
   bucketName is not used here.
   Can we rename this method to something like addVolumeToDB?
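   A sketch of what the renamed helper might look like. addVolumeToDB is the name proposed above; the OmVolumeArgs builder calls are assumptions modelled on the helpers used elsewhere in these tests:

```java
// Hypothetical renamed helper: drops the unused bucketName parameter and makes
// the intent (adding only a volume entry) explicit in the name.
public static void addVolumeToDB(String volumeName,
    OMMetadataManager omMetadataManager) throws Exception {
  OmVolumeArgs omVolumeArgs =
      OmVolumeArgs.newBuilder().setVolume(volumeName)
          .setAdminName(UUID.randomUUID().toString())
          .setOwnerName(UUID.randomUUID().toString()).build();
  omMetadataManager.getVolumeTable().put(
      omMetadataManager.getVolumeKey(volumeName), omVolumeArgs);
}
```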
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250617)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250616=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250616
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288802440
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketSetPropertyRequest.java
 ##
 @@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+import java.util.List;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.KeyValueUtil;
+import org.apache.hadoop.ozone.om.helpers.OmBucketArgs;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketSetPropertyResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+/**
+ * Handle SetBucketProperty Request.
+ */
+public class OMBucketSetPropertyRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketSetPropertyRequest.class);
+
+  public OMBucketSetPropertyRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+return getOmRequest();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+
+OMMetrics omMetrics = ozoneManager.getOmMetrics();
+
+// This will never be null on a real Ozone cluster. In tests it might be
+// null: we use Mockito to set the omMetrics object but still get null.
+// For now, this null check is added.
+if (omMetrics != null) {
+  omMetrics.incNumBucketUpdates();
+}
 
 Review comment:
   Let's add a TODO here to keep track of this.
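   For reference, a small test-side sketch of how the metrics stub could be wired so the mock does not return null. The accessor names mirror the ones appearing in this diff (getOmMetrics) and in the test setup (getMetrics); which one the production code actually calls is an assumption worth double-checking:

```java
// Stub whichever accessor the request path actually reads; an un-stubbed method
// on a Mockito mock returns null by default, which is what the comment above
// in the diff observes.
OMMetrics omMetrics = OMMetrics.create();
OzoneManager ozoneManager = Mockito.mock(OzoneManager.class);
Mockito.when(ozoneManager.getOmMetrics()).thenReturn(omMetrics);
Mockito.when(ozoneManager.getMetrics()).thenReturn(omMetrics);
```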
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250616)
Time Spent: 5h  (was: 4h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> And also in OM 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250618=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250618
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288722111
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest) throws ServiceException 
{
+RaftClientRequest raftClientRequest =
+createWriteRaftClientRequest(omRequest);
+RaftClientReply raftClientReply;
+try {
+  raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+  .get();
+} catch (Exception ex) {
+  throw new ServiceException(ex.getMessage(), ex);
+}
+
+return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+nextCallId(),
+Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+  throws ServiceException {
+// NotLeader exception is thrown only when the raft server to which the
+// request is submitted is not the leader. This can happen first time
+// when client is submitting request to OM.
+NotLeaderException notLeaderException = reply.getNotLeaderException();
+if (notLeaderException != null) {
+  throw new ServiceException(notLeaderException);
+}
+StateMachineException stateMachineException =
+reply.getStateMachineException();
+if (stateMachineException != null) {
+  OMResponse.Builder omResponse = OMResponse.newBuilder();
+  omResponse.setCmdType(omRequest.getCmdType());
+  omResponse.setSuccess(false);
+  omResponse.setMessage(stateMachineException.getCause().getMessage());
+  omResponse.setStatus(parseErrorStatus(
+  stateMachineException.getCause().getMessage()));
+  return omResponse.build();
+}
+
+try {
+  return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+} catch (InvalidProtocolBufferException ex) {
+  if (ex.getMessage() != null) {
+throw new ServiceException(ex.getMessage(), ex);
+  } else {
+throw new ServiceException(ex);
+  }
+}
+
+// TODO: Still need to handle RaftRetry failure exception and
+//  NotReplicated exception.
 
 Review comment:
   How are these exceptions handled currently?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250618)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client for communication with the 
> Ratis server; instead of that, use the Ratis server APIs.
>  
> In this Jira we will add the changes to implement bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250623=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250623
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288727105
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
 ##
 @@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis.utils;
+
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.request.OMBucketCreateRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketDeleteRequest;
+import org.apache.hadoop.ozone.om.request.OMBucketSetPropertyRequest;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+
+import java.io.IOException;
+
+/**
+ * Utility class used by OzoneManager HA.
+ */
+public final class OzoneManagerRatisUtils {
+
+  private OzoneManagerRatisUtils() {
+  }
+  /**
+   * Create OMClientRequest which encapsulates the OMRequest.
+   * @param omRequest
+   * @return OMClientRequest
+   * @throws IOException
+   */
+  public static OMClientRequest createClientRequest(OMRequest omRequest)
+  throws IOException {
+Type cmdType = omRequest.getCmdType();
+switch (cmdType) {
+case CreateBucket:
+  return new OMBucketCreateRequest(omRequest);
+case DeleteBucket:
+  return new OMBucketDeleteRequest(omRequest);
+case SetBucketProperty:
+  return new OMBucketSetPropertyRequest(omRequest);
+default:
+  // TODO: will update once all request types are implemented.
+  return null;
+}
+  }
+
+  /**
+   * Convert exception result to {@link OzoneManagerProtocolProtos.Status}.
+   * @param exception
+   * @return {@link OzoneManagerProtocolProtos.Status}
+   */
+  public static Status exceptionToResponseStatus(IOException exception) {
+if (exception instanceof OMException) {
+  return Status.values()[((OMException) exception).getResult().ordinal()];
 
 Review comment:
   Should it not be Status.valueOf()? Or does this also give the same result?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250623)
Time Spent: 5h 40m  (was: 5.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use OM Cache, double buffer.
> And also in OM previously we used to Ratis client for communication to Ratis 
> server, instead of that use Ratis server API's.
>  
> In this Jira will add the changes to implement bucket operations, and 
> HA/Non-HA will have a different code path, but once all requests are 
> implemented will have a 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250611=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250611
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497148116
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 85 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 757 | trunk passed |
   | +1 | compile | 329 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 982 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 335 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 556 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 527 | the patch passed |
   | +1 | compile | 282 | the patch passed |
   | +1 | cc | 282 | the patch passed |
   | +1 | javac | 282 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 641 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 72 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1407 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7085 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 05439e3af54b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/testReport/ |
   | Max. process+thread count | 4948 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250611)
Time Spent: 4h 20m  (was: 4h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250619=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250619
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288807522
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -390,7 +405,11 @@ private OMBucketCreateResponse createBucket(String 
volumeName,
 OmBucketInfo omBucketInfo =
 OmBucketInfo.newBuilder().setVolumeName(volumeName)
 .setBucketName(bucketName).setCreationTime(Time.now()).build();
-return new OMBucketCreateResponse(omBucketInfo);
+return new OMBucketCreateResponse(omBucketInfo, OMResponse.newBuilder()
 
 Review comment:
   OMDummyCreateBucketResponse seems to be doing the same thing as 
OMBucketCreateResponse. Why do we need 2 different tests?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250619)
Time Spent: 5h 10m  (was: 5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM cache and double buffer.
> Also, in OM we previously used the Ratis client to communicate with the Ratis 
> server; instead of that, use the Ratis server APIs.
>  
> This Jira will add the changes to implement bucket operations. For now, HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.
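
For context, below is a minimal, self-contained sketch of the cache-plus-double-buffer pattern described in this issue. It is not the OzoneManager implementation: every class and method name here is hypothetical, and the real code flushes batched writes into RocksDB rather than printing them.

{code}
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Illustrative sketch only: a write request applies its change to an
 * in-memory cache right away and enqueues it into the "current" buffer;
 * a background flusher swaps the buffers and drains the old one in a batch.
 */
public class DoubleBufferSketch {

  // In-memory cache consulted by readers before the backing store (hypothetical).
  private final Map<String, String> bucketCache = new ConcurrentHashMap<>();

  // Two buffers: writers append to currentBuffer, the flusher drains the other.
  private Queue<String> currentBuffer = new ConcurrentLinkedQueue<>();
  private Queue<String> readyBuffer = new ConcurrentLinkedQueue<>();

  /** Rough analogue of validateAndUpdateCache: update cache, defer the DB write. */
  public synchronized void createBucket(String volume, String bucket) {
    String key = volume + "/" + bucket;
    bucketCache.put(key, "bucketInfo");   // visible to readers immediately
    currentBuffer.add(key);               // persisted later in a batch
  }

  /** Called periodically by a background flusher thread. */
  public void flush() {
    Queue<String> toFlush;
    synchronized (this) {                 // swap buffers under the lock
      toFlush = currentBuffer;
      currentBuffer = readyBuffer;
      readyBuffer = toFlush;
    }
    for (String key : toFlush) {
      // The real system would issue a single batched RocksDB write here.
      System.out.println("flushing " + key);
    }
    toFlush.clear();
  }

  public static void main(String[] args) {
    DoubleBufferSketch db = new DoubleBufferSketch();
    db.createBucket("vol1", "bucket1");
    db.flush();
  }
}
{code}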



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250621=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250621
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288808597
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMBucketCreateRequest.java
 ##
 @@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.StorageTypeProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.util.Time;
+
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests OMBucketCreateRequest class, which handles CreateBucket request.
+ */
+public class TestOMBucketCreateRequest {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+  }
+
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+doPreExecute(volumeName, bucketName);
+  }
+
+
+  @Test
+  public void testValidateAndUpdateCache() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMBucketCreateRequest omBucketCreateRequest = doPreExecute(volumeName,
+bucketName);
+
+doValidateAndUpdateCache(volumeName, bucketName,
+omBucketCreateRequest.getOmRequest());
+
+  }
+
+  @Test
+  public void testValidateAndUpdateCacheWithNoVolume() throws Exception {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+
+OMRequest originalRequest = createBucketRequest(bucketName, volumeName,
+false, StorageTypeProto.SSD);
+
+OMBucketCreateRequest omBucketCreateRequest =
+new OMBucketCreateRequest(originalRequest);
+
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+
+// As we have not still 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250615=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250615
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:46
Start Date: 29/May/19 23:46
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288800930
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMBucketCreateRequest.java
 ##
 @@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request;
+
+import java.io.IOException;
+
+import com.google.common.base.Optional;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketEncryptionInfoProto;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.BucketInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocolPB.OMPBHelper;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CryptoProtocolVersionProto.ENCRYPTION_ZONES;
+
+/**
+ * Handles CreateBucket Request.
+ */
+public class OMBucketCreateRequest extends OMClientRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMBucketCreateRequest.class);
+
+  public OMBucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+
+// Get original request.
+CreateBucketRequest createBucketRequest =
+getOmRequest().getCreateBucketRequest();
+BucketInfo bucketInfo = createBucketRequest.getBucketInfo();
+
+// Get KMS provider.
+KeyProviderCryptoExtension kmsProvider =
+ozoneManager.getKmsProvider();
+
+// Create new Bucket request with new bucket info.
+CreateBucketRequest.Builder newCreateBucketRequest =
+createBucketRequest.toBuilder();
+
+BucketInfo.Builder newBucketInfo = bucketInfo.toBuilder();
+
+newCreateBucketRequest.setBucketInfo(
+newBucketInfo.setCreationTime(Time.now()));
+
+if (bucketInfo.hasBeinfo()) {
+  newBucketInfo.setBeinfo(getBeinfo(kmsProvider, bucketInfo));
+}
+
+newCreateBucketRequest.setBucketInfo(newBucketInfo.build());
+return getOmRequest().toBuilder().setCreateBucketRequest(
+newCreateBucketRequest.build()).build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex) {
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumBucketCreates();
+
+

[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=250606=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250606
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 29/May/19 23:21
Start Date: 29/May/19 23:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #871: 
HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250606)
Time Spent: 10m
Remaining Estimate: 0h

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.
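
As an illustration of the two counters listed above, here is a minimal sketch of a metrics holder using plain atomic counters. The class and method names are hypothetical; the actual implementation would likely register these with Hadoop's metrics framework instead.

{code}
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative sketch of a double-buffer metrics holder (not the real class). */
public class OMDoubleBufferMetricsSketch {

  private final AtomicLong flushIterations = new AtomicLong();
  private final AtomicLong totalTransactionsFlushed = new AtomicLong();

  /** Called once per buffer flush with the number of transactions written. */
  public void recordFlush(long transactionsInBatch) {
    flushIterations.incrementAndGet();
    totalTransactionsFlushed.addAndGet(transactionsInBatch);
  }

  public long getFlushIterations() {
    return flushIterations.get();
  }

  public long getTotalTransactionsFlushed() {
    return totalTransactionsFlushed.get();
  }
}
{code}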



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1579:
-
Labels: pull-request-available  (was: )

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1579:
-
Target Version/s: 0.5.0
  Status: Patch Available  (was: Open)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-29 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-1579:


Assignee: Bharat Viswanadham

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250604=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250604
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 23:17
Start Date: 29/May/19 23:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497142393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 561 | trunk passed |
   | +1 | compile | 296 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 839 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 543 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 504 | the patch passed |
   | +1 | compile | 288 | the patch passed |
   | +1 | cc | 288 | the patch passed |
   | +1 | javac | 288 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 661 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 80 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2229 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7485 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux d715f54a540b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/testReport/ |
   | Max. process+thread count | 3668 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Commented] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851383#comment-16851383
 ] 

Hudson commented on HDDS-1530:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16626 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16626/])
HDDS-1530. Freon support big files larger than 2GB and add --bufferSize (xyao: 
rev 9ad7cad2054854c9db280f5a44616ceb5f248a24)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestRandomKeyGenerator.java


> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon will frequently report OOM 
> exceptions.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so Freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer 
> at once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  
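
The buffered-write idea in the description can be illustrated with a short, self-contained sketch. The class and method names are hypothetical and this is not Freon's actual code: keySize is a long, and the stream is fed from a fixed-size buffer in a loop, so no allocation larger than bufferSize is ever needed.

{code}
import java.io.IOException;
import java.io.OutputStream;

public final class BufferedKeyWriter {

  private BufferedKeyWriter() { }

  /**
   * Writes keySize bytes to the stream by reusing a small buffer,
   * so keys larger than 2GB never require a key-sized allocation.
   */
  public static void writeKey(OutputStream os, long keySize, int bufferSize)
      throws IOException {
    byte[] buffer = new byte[bufferSize];       // e.g. 4 KB by default
    for (long remaining = keySize; remaining > 0; remaining -= bufferSize) {
      int chunk = (int) Math.min(bufferSize, remaining);
      os.write(buffer, 0, chunk);
    }
  }
}
{code}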



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-29 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1530:
-
  Resolution: Fixed
   Fix Version/s: 0.4.1
Target Version/s: 0.4.1  (was: 0.5.0)
  Status: Resolved  (was: Patch Available)

Thanks [~xudongcao] for the contribution and all for the reviews. I've committed 
the patch to trunk. 

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon will frequently report OOM 
> exceptions.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so Freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer 
> at once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?focusedWorklogId=250597=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250597
 ]

ASF GitHub Bot logged work on HDDS-1530:


Author: ASF GitHub Bot
Created on: 29/May/19 22:57
Start Date: 29/May/19 22:57
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #830: HDDS-1530. 
Freon support big files larger than 2GB and add --bufferSize and 
--validateWrites options.
URL: https://github.com/apache/hadoop/pull/830
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250597)
Time Spent: 5h 50m  (was: 5h 40m)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon will frequently report OOM 
> exceptions.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so Freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer 
> at once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?focusedWorklogId=250591=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250591
 ]

ASF GitHub Bot logged work on HDDS-1530:


Author: ASF GitHub Bot
Created on: 29/May/19 22:56
Start Date: 29/May/19 22:56
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #830: HDDS-1530. Freon 
support big files larger than 2GB and add --bufferSize and --validateWrites 
options.
URL: https://github.com/apache/hadoop/pull/830#issuecomment-497138201
 
 
   +1, I will merge/commit this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250591)
Time Spent: 5h 40m  (was: 5.5h)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon will frequently report OOM 
> exceptions.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so Freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer 
> at once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1530) Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and "--validateWrites" options.

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1530?focusedWorklogId=250589=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250589
 ]

ASF GitHub Bot logged work on HDDS-1530:


Author: ASF GitHub Bot
Created on: 29/May/19 22:54
Start Date: 29/May/19 22:54
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #830: HDDS-1530. 
Freon support big files larger than 2GB and add --bufferSize and 
--validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r288799644
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -622,7 +642,11 @@ public void run() {
 try (Scope writeScope = GlobalTracer.get()
 .buildSpan("writeKeyData")
 .startActive(true)) {
-  os.write(keyValue);
+  for (long nrRemaining = keySize - randomValue.length;
+nrRemaining > 0; nrRemaining -= bufferSize) {
+int curSize = (int)Math.min(bufferSize, nrRemaining);
+os.write(keyValueBuffer, 0, curSize);
 
 Review comment:
   You are right, there is no issue at the socket layer. I'm thinking of the DN 
side: the chunk files of the same key being written could be identical in this 
scheme. That might increase the write performance compared with 2GB of fully 
random chunks. As long as we use it consistently, it should be fine. Later on, 
we can add an option to write only zeros by default and random data up to 
bufferSize when a parameter is specified. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250589)
Time Spent: 5.5h  (was: 5h 20m)

> Ozone: Freon: Support big files larger than 2GB and add "--bufferSize" and 
> "--validateWrites" options.
> --
>
> Key: HDDS-1530
> URL: https://issues.apache.org/jira/browse/HDDS-1530
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> *Current problems:*
>  1. Freon does not support big files larger than 2GB because it uses an int 
> type "keySize" parameter and an int-sized "keyValue" buffer.
>  2. Freon allocates an entire buffer for each key at once, so if the key size 
> is large and the concurrency is high, Freon will frequently report OOM 
> exceptions.
>  3. Freon lacks options such as "--validateWrites", so users cannot manually 
> specify that verification is required after writing.
> *Some solutions:*
>  1. Use a long type "keySize" parameter so Freon can support big files 
> larger than 2GB.
>  2. Reuse a small buffer rather than allocating the entire key-size buffer 
> at once; the default buffer size is 4K and can be configured with the 
> "--bufferSize" parameter.
>  3. Add a "--validateWrites" option to the Freon command line; users can 
> provide this option to indicate that validation is required after writing.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250574=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250574
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 22:38
Start Date: 29/May/19 22:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497134161
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 501 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 807 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 286 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 469 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 557 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 250 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1515 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6502 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux db3c35b0cc9f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0ead209 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/testReport/ |
   | Max. process+thread count | 3812 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250574)
Time Spent: 4h  (was: 3h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: 

[jira] [Commented] (HDDS-1593) Improve logging for failures during pipeline creation and usage.

2019-05-29 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851353#comment-16851353
 ] 

Siddharth Wagle commented on HDDS-1593:
---

[~msingh] But we already print this information; do you have a log line from 
the failed attempt?

This is what will be printed if pipeline initialization fails:
{code}
String msg = "Pipeline initialization failed for pipeline:"
+ pipeline.getId() + " node:" + peer.getId();
{code}
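
For comparison, here is a hedged sketch of the richer message this JIRA asks for, printing the pipeline ID, every node in the pipeline, and the node that failed. The helper method and its parameters are illustrative and are not taken from the SCM code.

{code}
import java.util.Arrays;
import java.util.List;

public final class PipelineFailureLogSketch {

  /** Builds the more detailed failure message suggested in this JIRA (illustrative). */
  static String failureMessage(String pipelineId, List<String> nodeIds,
      String failedNodeId) {
    return "Pipeline initialization failed for pipeline: " + pipelineId
        + " nodes: " + String.join(", ", nodeIds)
        + " failed node: " + failedNodeId;
  }

  public static void main(String[] args) {
    System.out.println(failureMessage("p-1",
        Arrays.asList("dn-1", "dn-2", "dn-3"), "dn-2"));
  }
}
{code}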

> Improve logging for failures during pipeline creation and usage.
> 
>
> Key: HDDS-1593
> URL: https://issues.apache.org/jira/browse/HDDS-1593
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>
> When pipeline creation fails, the pipeline ID along with all the nodes 
> in the pipeline should be printed. The node for which pipeline creation 
> failed should be printed as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1593) Improve logging for failures during pipeline creation and usage.

2019-05-29 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1593:
-

Assignee: Siddharth Wagle

> Improve logging for failures during pipeline creation and usage.
> 
>
> Key: HDDS-1593
> URL: https://issues.apache.org/jira/browse/HDDS-1593
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>
> When pipeline creation fails, the pipeline ID along with all the nodes 
> in the pipeline should be printed. The node for which pipeline creation 
> failed should be printed as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250569=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250569
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 22:15
Start Date: 29/May/19 22:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497128592
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 15 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 554 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 791 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 303 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 483 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | cc | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 630 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 4 new + 5 unchanged - 0 fixed = 
9 total (was 5) |
   | +1 | findbugs | 503 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 242 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1007 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 5964 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle cc |
   | uname | Linux 1d3000f18253 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 751f0df |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/testReport/ |
   | Max. process+thread count | 5137 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250569)
Time Spent: 3h 50m  (was: 3h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop 

[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=250558=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250558
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 29/May/19 21:56
Start Date: 29/May/19 21:56
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497123569
 
 
   +1, pending Jenkins.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250558)
Time Spent: 6h 20m  (was: 6h 10m)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=250552=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250552
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 29/May/19 21:52
Start Date: 29/May/19 21:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-497122435
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 526 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 810 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 133 | trunk passed |
   | 0 | spotbugs | 291 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 469 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 489 | the patch passed |
   | +1 | compile | 264 | the patch passed |
   | +1 | cc | 264 | the patch passed |
   | +1 | javac | 264 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 628 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 72 | hadoop-ozone generated 9 new + 5 unchanged - 0 fixed = 
14 total (was 5) |
   | +1 | findbugs | 491 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 113 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 4976 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 287efc08253b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 751f0df |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/testReport/ |
   | Max. process+thread count | 1328 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250552)
Time Spent: 6h 10m  (was: 6h)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: 

[jira] [Commented] (HDFS-14521) Suppress setReplication logging while replaying edits

2019-05-29 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851335#comment-16851335
 ] 

Kihwal Lee commented on HDFS-14521:
---

No test. It is a log line change.
Failed tests are not related.

> Suppress setReplication logging while replaying edits
> -
>
> Key: HDFS-14521
> URL: https://issues.apache.org/jira/browse/HDFS-14521
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HDFS-14521.patch
>
>
> Currently, processing of setReplication on standby NNs causes logging.
> {noformat}
> 2101-14-29 17:49:04,026 [Edit log tailer] INFO namenode.FSDirectory: 
> Increasing replication from 3 to 10 for xxx
> {noformat}
> This should be suppressed during edit replaying.
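
A minimal sketch of the kind of guard the description implies: only emit the INFO line when the change comes from a client operation, not from edit-log replay. The isEditLogReplay flag, class name, and log message here are assumptions for illustration, not the actual FSDirectory change.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class SetReplicationLoggingSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(SetReplicationLoggingSketch.class);

  /**
   * Logs the replication change only when it originates from a client call;
   * replays of the edit log stay silent (illustrative guard).
   */
  static void logReplicationChange(String path, short oldRepl, short newRepl,
      boolean isEditLogReplay) {
    if (!isEditLogReplay && LOG.isInfoEnabled()) {
      LOG.info("Changing replication from {} to {} for {}",
          oldRepl, newRepl, path);
    }
  }
}
{code}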



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14195) OIV: print out storage policy id in oiv Delimited output

2019-05-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851310#comment-16851310
 ] 

Wei-Chiu Chuang commented on HDFS-14195:


+1, the patch looks good to me. It would be even better if it printed the 
canonical name of the policy rather than a number (say, ALL_SSD instead of 
12), but that is easy to look up in the Hadoop docs, so I'm okay with it.

[~adam.antal] let's use HDFS-14203 to add the switch to enable storage policy 
id. 

> OIV: print out storage policy id in oiv Delimited output
> 
>
> Key: HDFS-14195
> URL: https://issues.apache.org/jira/browse/HDFS-14195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HDFS-14195.001.patch, HDFS-14195.002.patch, 
> HDFS-14195.003.patch, HDFS-14195.004.patch, HDFS-14195.005.patch, 
> HDFS-14195.006.patch
>
>
> There is currently no command-line way to list all folders and files that 
> have a specific storage policy, such as ALL_SSD.
> Adding the storage policy id to the oiv output helps post-analysis build an 
> overview of all folders/files with a given storage policy and apply internal 
> regulation based on that information.
>  
> For PBImageXmlWriter.java, HDFS-9835 already added a function to print out 
> the xattrs, which include the storage policy.
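
As a rough sketch of what the extra column could look like in a tab-delimited 
oiv row (the column order and values are assumptions for illustration, not the 
HDFS-14195 patch itself):

{code:java}
/** Hypothetical sketch: appending a StoragePolicyId column to a
 *  tab-delimited oiv output row. */
public class DelimitedRowExample {
  public static void main(String[] args) {
    String delimiter = "\t";
    StringBuilder row = new StringBuilder();
    // Existing columns (path, replication, mtime, ...) come first.
    row.append("/data/warm/file1").append(delimiter)
       .append(3).append(delimiter)
       .append("2019-05-29 17:49").append(delimiter)
       // New trailing column: the numeric storage policy id (e.g. 12).
       .append(12);
    System.out.println(row);
  }
}
{code}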



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14513) FSImage which is saving should be clean while NameNode shutdown

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851309#comment-16851309
 ] 

Hadoop QA commented on HDFS-14513:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14513 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970223/HDFS-14513.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 991692d6c670 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / abf76ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26863/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26863/testReport/ |
| Max. process+thread count | 2811 (vs. 

[jira] [Commented] (HDFS-14521) Suppress setReplication logging while replaying edits

2019-05-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851305#comment-16851305
 ] 

Hadoop QA commented on HDFS-14521:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970229/HDFS-14521.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f688d70c202 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 751f0df |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26864/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26864/testReport/ |
| Max. process+thread count | 3943 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=250519=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-250519
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 29/May/19 21:05
Start Date: 29/May/19 21:05
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497108920
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 250519)
Time Spent: 3h 40m  (was: 3.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Also, OM previously used the Ratis client to communicate with the Ratis 
> server; use the Ratis server APIs instead.
>  
> This Jira adds the changes to implement bucket operations. HA and non-HA 
> will have different code paths for now, but once all requests are 
> implemented there will be a single code path.
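
As a rough illustration of the double-buffer idea mentioned here (hypothetical 
names, not the actual OzoneManager implementation): request handlers append to 
a current buffer while a flusher thread drains the previously filled one, so 
writes never wait on the DB flush:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical double-buffer sketch: two batches are swapped so that
 *  request handlers keep appending while the flusher drains a full batch. */
public class DoubleBuffer<T> {
  private List<T> current = new ArrayList<>();
  private List<T> spare = new ArrayList<>();

  /** Called by request handlers after updating the in-memory cache. */
  public synchronized void add(T entry) {
    current.add(entry);
    notifyAll();
  }

  /** Called by the flusher thread: waits for entries, swaps the buffers,
   *  and returns the batch to be written to the DB. */
  public synchronized List<T> swapAndGetBatch() throws InterruptedException {
    while (current.isEmpty()) {
      wait();
    }
    List<T> batch = current;
    current = spare;
    current.clear();       // reuse the previously flushed buffer
    spare = batch;
    return batch;
  }
}
{code}

In this sketch a single flusher thread would loop on swapAndGetBatch() and 
write each returned batch to the DB in one commit.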



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1542) Create Radix tree to support ozone prefix ACLs

2019-05-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16851299#comment-16851299
 ] 

Hudson commented on HDDS-1542:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16625 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16625/])
HDDS-1542. Create Radix tree to support ozone prefix ACLs. Contributed 
(aengineer: rev 0ead2090a65817db52f9dc687befa13bebb72d51)
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/RadixNode.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/util/RadixTree.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/util/TestRadixTree.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/util/package-info.java


> Create Radix tree to support ozone prefix ACLs 
> ---
>
> Key: HDDS-1542
> URL: https://issues.apache.org/jira/browse/HDDS-1542
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Create Radix tree to support ozone prefix ACLs.
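
A toy sketch of the prefix-matching idea (hypothetical names, not the 
RadixNode/RadixTree classes listed above): ACLs are stored on path components, 
and a key resolves to the ACL of its longest matching prefix:

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Toy prefix tree keyed on path components, illustrating longest-prefix
 *  lookup for prefix ACLs. Not the HDDS-1542 implementation. */
public class PrefixAclTree {
  private static final class Node {
    final Map<String, Node> children = new HashMap<>();
    String acl; // null when no ACL is set at this prefix
  }

  private final Node root = new Node();

  public void setAcl(String prefix, String acl) {
    Node node = root;
    for (String part : prefix.split("/")) {
      if (part.isEmpty()) {
        continue;
      }
      node = node.children.computeIfAbsent(part, k -> new Node());
    }
    node.acl = acl;
  }

  /** Returns the ACL of the longest prefix covering the given path. */
  public String longestPrefixAcl(String path) {
    Node node = root;
    String result = root.acl;
    for (String part : path.split("/")) {
      if (part.isEmpty()) {
        continue;
      }
      node = node.children.get(part);
      if (node == null) {
        break;
      }
      if (node.acl != null) {
        result = node.acl;
      }
    }
    return result;
  }
}
{code}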



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


