[jira] [Commented] (HDFS-14511) FSEditlog write both Quorum Journal and Local disk by default in HA using QJM scenario

2019-05-23 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847261#comment-16847261
 ] 

He Xiaoqiao commented on HDFS-14511:


Thanks [~ayushtkn] for your quick comment. I believe it is the same issue, and I 
have linked it to HDFS-12733. Any thoughts on HDFS-12733 or this one?

> FSEditlog write both Quorum Journal and Local disk by default in HA using QJM 
> scenario
> --
>
> Key: HDFS-14511
> URL: https://issues.apache.org/jira/browse/HDFS-14511
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>
> Recently, I met a case involving FSEditLog in an HA-with-QJM scenario. The 
> NameNode entered a suspended state and could not process any further RPC requests.
> The root cause is that the local disk was under very high load, which blocked 
> flushing edit log records locally; subsequent RPC requests then occupied all RPC 
> handlers, because FSEditLog writes each edit log record, in order, to both the 
> FileJournal (a local directory, by default the same one that holds the FsImage) 
> and the QuorumJournal, and there is no configuration to switch the FileJournal 
> off. However, the local edit log is essentially never used in this setup.
> In more detail, the locations edit logs are written to are decided by the 
> configuration items 'dfs.namenode.shared.edits.dir' and 'dfs.namenode.name.dir' 
> (if 'dfs.namenode.edits.dir' is not set, it falls back to 'dfs.namenode.name.dir', 
> where the fsimage is located). So by default JournalSet = QuorumJournal 
> (SharedEditsDirs, set by 'dfs.namenode.shared.edits.dir') + FileJournal 
> (LocalStorageEditsDirs, defaulting to 'dfs.namenode.name.dir'). On the other hand, 
> both of these config items have to be set in HA with QJM.
> In short, edit logs are double-written to both QJM and the local disk by default, 
> and there is no way to turn off the local write with the current implementation. 
> I propose we either offer users a choice or turn off local edit log writes by 
> default in HA with QJM.
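For reference, a minimal sketch of the default configuration pattern that produces this double write. The JournalNode addresses follow the workaround example quoted later in this thread; the local directory path is an illustrative assumption:
{code:xml}
<!-- Shared edits go to the JournalNode quorum. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://qjn1:8485;qjn2:8485;qjn3:8485/myCluster</value>
</property>
<!-- dfs.namenode.edits.dir is left unset, so it falls back to dfs.namenode.name.dir
     and FSEditLog also keeps a local FileJournal under this directory. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/namenode</value>
</property>
{code}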



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847258#comment-16847258
 ] 

Hadoop QA commented on HDDS-1458:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
2s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
4s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 23m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
9s{color} | {color:orange} Error running pylint. Please check pylint stderr 
files. {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green}  0m  
9s{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}160m 45s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}362m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Commented] (HDFS-14511) FSEditlog write both Quorum Journal and Local disk by default in HA using QJM scenario

2019-05-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847254#comment-16847254
 ] 

Ayush Saxena commented on HDFS-14511:
-

Seems similar to HDFS-12733.

[~hexiaoqiao] can you take a look and check whether it is related?

> FSEditlog write both Quorum Journal and Local disk by default in HA using QJM 
> scenario
> --
>
> Key: HDFS-14511
> URL: https://issues.apache.org/jira/browse/HDFS-14511
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>
> Recently, I met a case involving FSEditLog in an HA-with-QJM scenario. The 
> NameNode entered a suspended state and could not process any further RPC requests.
> The root cause is that the local disk was under very high load, which blocked 
> flushing edit log records locally; subsequent RPC requests then occupied all RPC 
> handlers, because FSEditLog writes each edit log record, in order, to both the 
> FileJournal (a local directory, by default the same one that holds the FsImage) 
> and the QuorumJournal, and there is no configuration to switch the FileJournal 
> off. However, the local edit log is essentially never used in this setup.
> In more detail, the locations edit logs are written to are decided by the 
> configuration items 'dfs.namenode.shared.edits.dir' and 'dfs.namenode.name.dir' 
> (if 'dfs.namenode.edits.dir' is not set, it falls back to 'dfs.namenode.name.dir', 
> where the fsimage is located). So by default JournalSet = QuorumJournal 
> (SharedEditsDirs, set by 'dfs.namenode.shared.edits.dir') + FileJournal 
> (LocalStorageEditsDirs, defaulting to 'dfs.namenode.name.dir'). On the other hand, 
> both of these config items have to be set in HA with QJM.
> In short, edit logs are double-written to both QJM and the local disk by default, 
> and there is no way to turn off the local write with the current implementation. 
> I propose we either offer users a choice or turn off local edit log writes by 
> default in HA with QJM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847252#comment-16847252
 ] 

Takanobu Asanuma commented on HDFS-14475:
-

[~crh] Thanks for working on this.

{{NamenodeBeanMetrics}} already has an {{isSecurityEnabled}} method. It would be 
better to override it instead of creating a new one.

We may also be able to rename the MBeans in {{NamenodeBeanMetrics}}, since 
"NameNodeInfo" and "NameNodeStatus" are misleading.

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched-off Jira to expose a metric so that the router's security 
> status can be displayed on the UI. We are still unclear whether more work needs 
> to be done for dealing with CORS etc.; 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14511) FSEditlog write both Quorum Journal and Local disk by default in HA using QJM scenario

2019-05-23 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847249#comment-16847249
 ] 

He Xiaoqiao commented on HDFS-14511:


[~shv] offered one valid solution: set 'dfs.namenode.shared.edits.dir' and 
'dfs.namenode.edits.dir' to both point to the Quorum Journal, so that FSEditLog 
will not write to the local disk. It is a trick, but indeed a feasible solution.
{code:xml}
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://qjn1:8485;qjn2:8485;qjn3:8485/myCluster</value>
</property>
<property>
  <name>dfs.namenode.edits.dir</name>
  <value>qjournal://qjn1:8485;qjn2:8485;qjn3:8485/myCluster</value>
</property>
{code}
I think we should update the code logic and solve this issue completely.

> FSEditlog write both Quorum Journal and Local disk by default in HA using QJM 
> scenario
> --
>
> Key: HDFS-14511
> URL: https://issues.apache.org/jira/browse/HDFS-14511
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, qjm
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>
> Recently, I met a case involving FSEditLog in an HA-with-QJM scenario. The 
> NameNode entered a suspended state and could not process any further RPC requests.
> The root cause is that the local disk was under very high load, which blocked 
> flushing edit log records locally; subsequent RPC requests then occupied all RPC 
> handlers, because FSEditLog writes each edit log record, in order, to both the 
> FileJournal (a local directory, by default the same one that holds the FsImage) 
> and the QuorumJournal, and there is no configuration to switch the FileJournal 
> off. However, the local edit log is essentially never used in this setup.
> In more detail, the locations edit logs are written to are decided by the 
> configuration items 'dfs.namenode.shared.edits.dir' and 'dfs.namenode.name.dir' 
> (if 'dfs.namenode.edits.dir' is not set, it falls back to 'dfs.namenode.name.dir', 
> where the fsimage is located). So by default JournalSet = QuorumJournal 
> (SharedEditsDirs, set by 'dfs.namenode.shared.edits.dir') + FileJournal 
> (LocalStorageEditsDirs, defaulting to 'dfs.namenode.name.dir'). On the other hand, 
> both of these config items have to be set in HA with QJM.
> In short, edit logs are double-written to both QJM and the local disk by default, 
> and there is no way to turn off the local write with the current implementation. 
> I propose we either offer users a choice or turn off local edit log writes by 
> default in HA with QJM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14511) FSEditlog write both Quorum Journal and Local disk by default in HA using QJM scenario

2019-05-23 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HDFS-14511:
--

 Summary: FSEditlog write both Quorum Journal and Local disk by 
default in HA using QJM scenario
 Key: HDFS-14511
 URL: https://issues.apache.org/jira/browse/HDFS-14511
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, qjm
Reporter: He Xiaoqiao
Assignee: He Xiaoqiao


Recently, I met a case involving FSEditLog in an HA-with-QJM scenario. The NameNode 
entered a suspended state and could not process any further RPC requests.
The root cause is that the local disk was under very high load, which blocked 
flushing edit log records locally; subsequent RPC requests then occupied all RPC 
handlers, because FSEditLog writes each edit log record, in order, to both the 
FileJournal (a local directory, by default the same one that holds the FsImage) and 
the QuorumJournal, and there is no configuration to switch the FileJournal off. 
However, the local edit log is essentially never used in this setup.
In more detail, the locations edit logs are written to are decided by the 
configuration items 'dfs.namenode.shared.edits.dir' and 'dfs.namenode.name.dir' 
(if 'dfs.namenode.edits.dir' is not set, it falls back to 'dfs.namenode.name.dir', 
where the fsimage is located). So by default JournalSet = QuorumJournal 
(SharedEditsDirs, set by 'dfs.namenode.shared.edits.dir') + FileJournal 
(LocalStorageEditsDirs, defaulting to 'dfs.namenode.name.dir'). On the other hand, 
both of these config items have to be set in HA with QJM.
In short, edit logs are double-written to both QJM and the local disk by default, 
and there is no way to turn off the local write with the current implementation. I 
propose we either offer users a choice or turn off local edit log writes by default 
in HA with QJM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-23 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847244#comment-16847244
 ] 

Supratim Deka commented on HDDS-700:


Looks like the checkstyle issues reported on patch 03 slipped by and made it into 
the commit.

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS: by default, with 3 replicas, two 
> replicas will be on the same rack, and the third replica and any remaining 
> replicas will be on different racks.
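A simplified, hedged sketch of the placement rule described above (this is not the SCM placement code; it assumes there are at least as many racks as distinct-rack replicas):
{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: two replicas share a rack, the rest land on distinct other racks. */
public class RackPlacementSketch {
  static List<String> placeReplicas(List<String> racks, int replicas) {
    List<String> placement = new ArrayList<>();
    placement.add(racks.get(0));              // replica 1
    if (replicas > 1) {
      placement.add(racks.get(0));            // replica 2 shares the first rack
    }
    for (int i = 2; i < replicas; i++) {
      placement.add(racks.get(i - 1));        // remaining replicas on different racks
    }
    return placement;
  }

  public static void main(String[] args) {
    // Prints [/rack1, /rack1, /rack2]
    System.out.println(placeReplicas(List.of("/rack1", "/rack2", "/rack3"), 3));
  }
}
{code}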



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1454) GC other system pause events can trigger pipeline destroy for all the nodes in the cluster

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1454:

Affects Version/s: (was: 0.3.0)

> GC other system pause events can trigger pipeline destroy for all the nodes 
> in the cluster
> --
>
> Key: HDDS-1454
> URL: https://issues.apache.org/jira/browse/HDDS-1454
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> In a MiniOzoneChaosCluster run, it was observed that events like GC pauses or 
> any other pauses in SCM can mark all the datanodes as stale in SCM. This will 
> trigger multiple pipeline destroys and render the system unusable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-23 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-14508:
---

Assignee: Takanobu Asanuma

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
>
> The Router UI has tags that are not used or are incorrectly set. The code should 
> be cleaned up.
> One such example is
> Path: 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847223#comment-16847223
 ] 

Takanobu Asanuma commented on HDFS-14508:
-

 Thanks for filing the issue. I'd like to work on this jira.

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Priority: Minor
>
> The Router UI has tags that are not used or are incorrectly set. The code should 
> be cleaned up.
> One such example is
> Path: 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247856=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247856
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 24/May/19 03:46
Start Date: 24/May/19 03:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-495462188
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 984 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 551 | trunk passed |
   | +1 | compile | 262 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 806 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 138 | trunk passed |
   | 0 | spotbugs | 291 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 493 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-hdds: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) |
   | -0 | checkstyle | 37 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 3 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 135 | the patch passed |
   | -1 | findbugs | 196 | hadoop-hdds generated 6 new + 0 unchanged - 0 fixed 
= 6 total (was 0) |
   | -1 | findbugs | 292 | hadoop-ozone generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 173 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1426 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 11805 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkIndex; locked 88% of 
time  Unsynchronized access at BlockInputStream.java:88% of time  
Unsynchronized access at BlockInputStream.java:[line 379] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkOffsets; locked 88% of 
time  Unsynchronized access at BlockInputStream.java:88% of time  
Unsynchronized access at BlockInputStream.java:[line 336] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkStreams; locked 92% of 
time  Unsynchronized access at BlockInputStream.java:92% of time  
Unsynchronized access at BlockInputStream.java:[line 336] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.allocated; locked 50% of 
time  Unsynchronized access at ChunkInputStream.java:50% of time  
Unsynchronized access at ChunkInputStream.java:[line 501] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.bufferLength; locked 50% of 
time  Unsynchronized access at ChunkInputStream.java:50% of time  
Unsynchronized access at ChunkInputStream.java:[line 491] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.ChunkInputStream.bufferOffset; locked 53% of 
time  Unsynchronized access at ChunkInputStream.java:53% of time  
Unsynchronized access at ChunkInputStream.java:[line 491] |
   | FindBugs | module:hadoop-ozone |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.client.io.KeyInputStream.blockIndex; locked 92% of time 
 Unsynchronized access at KeyInputStream.java:92% of time  Unsynchronized 
access at KeyInputStream.java:[line 238] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.ozone.client.io.KeyInputStream.blockOffsets; locked 66% of 
time  Unsynchronized access at KeyInputStream.java:66% of time  Unsynchronized 
access at KeyInputStream.java:[line 91] |
   | Failed junit tests | hadoop.hdds.scm.storage.TestBlockInputStream 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247854=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247854
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 24/May/19 03:46
Start Date: 24/May/19 03:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287206952
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,536 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List buffers;
+
+  // Index of the buffers corresponding to the current postion of the buffers
+  private int bufferIndex;
+  
+  // The offset of the current data residing in the buffers w.r.t the start
+  // of chunk data
+  private long bufferOffset;
+  
+  // The number of bytes of chunk data residing in the buffers currently
+  private long bufferLength;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247854)
Time Spent: 4h 10m  (was: 4h)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk even 
> if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that it reads only the part 
> of the chunk file needed by the client, plus the part of the chunk file required 
> to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the chunk, 
> and a checksum is stored for every 100 bytes in the chunk, i.e. the first 
> checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so on. To 
> verify bytes 120 to 450, we would need to read bytes 100 to 499 so that checksum 
> verification can be done.
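A small sketch of the boundary arithmetic described above (variable names are illustrative, not taken from the patch):
{code:java}
public class ChecksumAlignmentSketch {
  public static void main(String[] args) {
    // Align a requested byte range [readStart, readEnd] (inclusive) to checksum boundaries.
    long bytesPerChecksum = 100;
    long readStart = 120;
    long readEnd = 450;

    // Round the start down and the end up to whole checksum-sized units.
    long alignedStart = (readStart / bytesPerChecksum) * bytesPerChecksum;        // 100
    long alignedEnd = ((readEnd / bytesPerChecksum) + 1) * bytesPerChecksum - 1;  // 499

    System.out.println("Read bytes " + alignedStart + " to " + alignedEnd
        + " to verify checksums covering " + readStart + ".." + readEnd);
  }
}
{code}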



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247855=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247855
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 24/May/19 03:46
Start Date: 24/May/19 03:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287206947
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,536 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List buffers;
+
+  // Index of the buffers corresponding to the current postion of the buffers
+  private int bufferIndex;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247855)
Time Spent: 4h 20m  (was: 4h 10m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk even 
> if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that it reads only the part 
> of the chunk file needed by the client, plus the part of the chunk file required 
> to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the chunk, 
> and a checksum is stored for every 100 bytes in the chunk, i.e. the first 
> checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so on. To 
> verify bytes 120 to 450, we would need to read bytes 100 to 499 so that checksum 
> verification can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247853=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247853
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 24/May/19 03:46
Start Date: 24/May/19 03:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287206948
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,536 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List buffers;
+
+  // Index of the buffers corresponding to the current postion of the buffers
+  private int bufferIndex;
+  
+  // The offset of the current data residing in the buffers w.r.t the start
+  // of chunk data
+  private long bufferOffset;
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247853)
Time Spent: 4h  (was: 3h 50m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk even 
> if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that it reads only the part 
> of the chunk file needed by the client, plus the part of the chunk file required 
> to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the chunk, 
> and a checksum is stored for every 100 bytes in the chunk, i.e. the first 
> checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so on. To 
> verify bytes 120 to 450, we would need to read bytes 100 to 499 so that checksum 
> verification can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1587) Support dynamically adding delegated class to filteredclass loader

2019-05-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1587:


 Summary: Support dynamically adding delegated class to 
filteredclass loader
 Key: HDDS-1587
 URL: https://issues.apache.org/jira/browse/HDDS-1587
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.4.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HDDS-922 added a filtered class loader with a list of delegated classes that will 
be loaded with the app launcher's classloader. With security enabled on ozone-0.4, 
there are some incompatible changes in the hadoop-common and hadoop-auth modules 
between Hadoop 2.x and Hadoop 3.x. Some examples can be seen in HDDS-1080, where 
the fix had to be made along with a rebuild/release.

This ticket is opened to allow dynamically adding delegated classes or class 
prefixes via an environment variable. This way, we can easily adjust the setting 
for different deployments without a rebuild/release.
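A minimal sketch of the idea (the environment variable name and the surrounding classloader wiring are assumptions for illustration, not the actual implementation):
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Illustrative sketch: merge extra delegated-class prefixes from an environment variable. */
public class DelegatedPrefixSketch {
  public static void main(String[] args) {
    // Built-in delegation list (stand-in for the one the filtered classloader already has).
    List<String> delegatedPrefixes = new ArrayList<>(Arrays.asList(
        "org.apache.hadoop.security.", "org.apache.log4j."));

    // Hypothetical variable name; the real one would be defined by the patch.
    String extra = System.getenv("OZONE_CLASSLOADER_EXTRA_DELEGATED_PREFIXES");
    if (extra != null && !extra.isEmpty()) {
      for (String prefix : extra.split(",")) {
        delegatedPrefixes.add(prefix.trim());
      }
    }
    System.out.println("Delegated prefixes: " + delegatedPrefixes);
  }
}
{code}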

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=247843=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247843
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 24/May/19 03:13
Start Date: 24/May/19 03:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-495457257
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 42 | Maven dependency ordering for branch |
   | +1 | mvninstall | 510 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 866 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 525 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 518 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | cc | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 2 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | -1 | findbugs | 316 | hadoop-ozone generated 7 new + 0 unchanged - 0 fixed 
= 7 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 157 | hadoop-hdds in the patch failed. |
   | -1 | unit | 111 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 5174 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Nullcheck of volumeArgs at line 536 of value previously dereferenced 
in org.apache.hadoop.ozone.om.VolumeManagerImpl.addAcl(OzoneObj, OzoneAcl)  At 
VolumeManagerImpl.java:536 of value previously dereferenced in 
org.apache.hadoop.ozone.om.VolumeManagerImpl.addAcl(OzoneObj, OzoneAcl)  At 
VolumeManagerImpl.java:[line 536] |
   |  |  Nullcheck of volumeArgs at line 581 of value previously dereferenced 
in org.apache.hadoop.ozone.om.VolumeManagerImpl.removeAcl(OzoneObj, OzoneAcl)  
At VolumeManagerImpl.java:581 of value previously dereferenced in 
org.apache.hadoop.ozone.om.VolumeManagerImpl.removeAcl(OzoneObj, OzoneAcl)  At 
VolumeManagerImpl.java:[line 581] |
   |  |  Nullcheck of volumeArgs at line 627 of value previously dereferenced 
in org.apache.hadoop.ozone.om.VolumeManagerImpl.setAcl(OzoneObj, List)  At 
VolumeManagerImpl.java:627 of value previously dereferenced in 
org.apache.hadoop.ozone.om.VolumeManagerImpl.setAcl(OzoneObj, List)  At 
VolumeManagerImpl.java:[line 627] |
   |  |  Return value of java.util.Objects.nonNull(Object) ignored, but method 
has no side effect  At OmOzoneAclMap.java:but method has no side effect  At 
OmOzoneAclMap.java:[line 79] |
   |  |  Return value of java.util.Objects.nonNull(Object) ignored, but method 
has no side effect  At OmOzoneAclMap.java:but method has no side effect  At 
OmOzoneAclMap.java:[line 118] |
   |  |  Return value of java.util.Objects.nonNull(Object) ignored, but method 
has no side effect  At OmOzoneAclMap.java:but method has no side effect  At 
OmOzoneAclMap.java:[line 105] |
   |  |  Return value of java.util.Objects.nonNull(Object) ignored, but method 
has no side effect  At OmOzoneAclMap.java:but method has no side effect  At 
OmOzoneAclMap.java:[line 91] |
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-847/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 499c900f01b4 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 

[jira] [Work logged] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1559?focusedWorklogId=247840=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247840
 ]

ASF GitHub Bot logged work on HDDS-1559:


Author: ASF GitHub Bot
Created on: 24/May/19 02:47
Start Date: 24/May/19 02:47
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #841: HDDS-1559. Include 
committedBytes to determine Out of Space in VolumeChoosingPolicy. Contributed 
by Supratim Deka
URL: https://github.com/apache/hadoop/pull/841#issuecomment-495453061
 
 
   +1 pending the pre-commit run.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247840)
Time Spent: 50m  (was: 40m)

> Include committedBytes to determine Out of Space in VolumeChoosingPolicy
> 
>
> Key: HDDS-1559
> URL: https://issues.apache.org/jira/browse/HDDS-1559
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This is a follow-up from HDDS-1511 and HDDS-1535.
> Currently, when creating a new Container, the DN invokes 
> RoundRobinVolumeChoosingPolicy#chooseVolume(). This routine checks for 
> (volume available space > container max size). If no eligible volume is found, 
> the policy throws a DiskOutOfSpaceException. This is the current behaviour.
> However, the computation of available space does not take into consideration the 
> space that is going to be consumed by writes to existing containers which are 
> still Open and accepting chunk writes.
> This Jira proposes to enhance the space availability check in chooseVolume by 
> including committed space (committedBytes in HddsVolume) in the equation.
> The handling/management of the exception in Ratis will not be modified in this 
> Jira; that will be scoped separately as part of the Datanode IO Failure handling 
> work.
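A minimal sketch of the proposed check, under the assumption that committed space is exposed per volume (the class, field, and method names here are illustrative, not the actual HddsVolume or policy API):
{code:java}
import java.util.List;

/** Illustrative sketch of choosing a volume while accounting for committed space. */
public class CommittedSpaceCheckSketch {

  /** Stand-in for the per-volume state the policy would consult. */
  static class Volume {
    final String name;
    final long availableBytes;   // raw free space on the volume
    final long committedBytes;   // space already reserved by open containers
    Volume(String name, long availableBytes, long committedBytes) {
      this.name = name;
      this.availableBytes = availableBytes;
      this.committedBytes = committedBytes;
    }
  }

  static Volume chooseVolume(List<Volume> volumes, long containerMaxSize) {
    for (Volume v : volumes) {
      // Proposed check: subtract committed space before comparing with the container size.
      if (v.availableBytes - v.committedBytes > containerMaxSize) {
        return v;
      }
    }
    // The DN would throw DiskOutOfSpaceException here.
    throw new IllegalStateException("Out of space on all volumes");
  }

  public static void main(String[] args) {
    List<Volume> vols = List.of(
        new Volume("disk1", 10L << 30, 9L << 30),   // 10 GB free, 9 GB committed
        new Volume("disk2", 8L << 30, 1L << 30));   // 8 GB free, 1 GB committed
    System.out.println("Chose " + chooseVolume(vols, 5L << 30).name); // disk2
  }
}
{code}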



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-05-23 Thread Yuxuan Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847190#comment-16847190
 ] 

Yuxuan Wang commented on HDFS-14509:


Hi [~kihwal], [~brahmareddy], thanks for the comments. After taking a look, it is 
the same issue as HDFS-6708.
I wonder if we can use {{token.getIdentifier()}} instead of the following when 
computing the password.
{code}
public byte[] retrievePassword(BlockTokenIdentifier identifier)
{
...
return createPassword(identifier.getBytes(), key.getKey());
}
{code}
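In other words, the verification path could compute the password over the raw identifier bytes carried in the token, rather than over the re-serialized parsed identifier. A hedged sketch of that idea (names follow the snippets quoted below; this is not an actual patch):
{code:java}
// Illustrative only: use the token's raw identifier bytes for password computation,
// so fields unknown to an older DN are not dropped by parse-and-reserialize.
byte[] expectedPassword = createPassword(token.getIdentifier(), key.getKey());
if (!Arrays.equals(expectedPassword, token.getPassword())) {
  throw new InvalidToken("Block token with " + id.toString()
      + " doesn't have the correct token password");
}
{code}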

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Major
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first. There will then be an intermediate state in which the 
> NN is 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a 
> block, it will get a block token from the NN and then deliver the token to the 
> DN, which verifies the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-23 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847188#comment-16847188
 ] 

Sammi Chen edited comment on HDDS-700 at 5/24/19 2:27 AM:
--

Thanks [~xyao] for helping review and commit the patch. 

{quote}DatanodeDetails.java 
Line 357: if the network topology has additional layers above RACK, should we 
consider a more generic default network location?{quote}
I thought the same. It would be better to get the default network location from 
NetworkTopologyInstance. But that means NetworkTopologyInstance needs to be passed 
in as a parameter when instantiating a DatanodeDetails object. I will think about 
how to refactor this part in follow-up JIRAs.


was (Author: sammi):
bq. Thanks [~xyao] for helping review and commit the patch. 

{quote}DatanodeDetails.java 
Line 357: if the network topology has additional layers above RACK, should we 
consider a more generic default network location?{quote}
I thought the same. It would be better to get the default network location from 
NetworkTopologyInstance. But that means NetworkTopologyInstance needs to be passed 
in as a parameter when instantiating a DatanodeDetails object. I will think about 
how to refactor this part in follow-up JIRAs.

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS: by default, with 3 replicas, two 
> replicas will be on the same rack, and the third replica and any remaining 
> replicas will be on different racks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-23 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847188#comment-16847188
 ] 

Sammi Chen commented on HDDS-700:
-

bq. Thanks [~xyao] for helping review and commit the patch. 

{quote}DatanodeDetails.java 
Line 357: if the network topology has additional layers above RACK, should we 
consider a more generic default network location?{quote}
I thought the same. It would be better to get the default network location from 
NetworkTopologyInstance. But that means NetworkTopologyInstance needs to be passed 
in as a parameter when instantiating a DatanodeDetails object. I will think about 
how to refactor this part in follow-up JIRAs.

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS: by default, with 3 replicas, two 
> replicas will be on the same rack, and the third replica and any remaining 
> replicas will be on different racks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=247822=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247822
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 24/May/19 01:47
Start Date: 24/May/19 01:47
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #847: HDDS-1539. Implement 
addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847#issuecomment-495443352
 
 
   adding initial draft .
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247822)
Time Spent: 20m  (was: 10m)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-23 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1539:
-
Status: Patch Available  (was: In Progress)

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?focusedWorklogId=247821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247821
 ]

ASF GitHub Bot logged work on HDDS-1539:


Author: ASF GitHub Bot
Created on: 24/May/19 01:46
Start Date: 24/May/19 01:46
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #847: HDDS-1539. 
Implement addAcl,removeAcl,setAcl,getAcl for Volume. Contributed Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/847
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247821)
Time Spent: 10m
Remaining Estimate: 0h

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1539) Implement addAcl,removeAcl,setAcl,getAcl for Volume

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1539:
-
Labels: pull-request-available  (was: )

> Implement addAcl,removeAcl,setAcl,getAcl for Volume
> ---
>
> Key: HDDS-1539
> URL: https://issues.apache.org/jira/browse/HDDS-1539
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>
> Implement addAcl,removeAcl,setAcl,getAcl for Volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14440) RBF: Optimize the file write process in case of multiple destinations.

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847162#comment-16847162
 ] 

Íñigo Goiri commented on HDFS-14440:


As long as we understand what the trade-off is, it is fine with me.
If you guys want to go with the approach in [^HDFS-14440-HDFS-13891-06.patch], 
go ahead.
I'm +1 for everything except the invokeSequential() vs invokeConcurrent() choice; for 
that I'm +0, so there is no blocker here.

> RBF: Optimize the file write process in case of multiple destinations.
> --
>
> Key: HDFS-14440
> URL: https://issues.apache.org/jira/browse/HDFS-14440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14440-HDFS-13891-01.patch, 
> HDFS-14440-HDFS-13891-02.patch, HDFS-14440-HDFS-13891-03.patch, 
> HDFS-14440-HDFS-13891-04.patch, HDFS-14440-HDFS-13891-05.patch, 
> HDFS-14440-HDFS-13891-06.patch
>
>
> In case of multiple destinations, we need to check whether the file already exists 
> in one of the subclusters. For this we use the existing getBlockLocation() API, 
> which is by default a sequential call.
> In the common scenario where the file still needs to be created, each subcluster is 
> checked sequentially; this could be done concurrently to save time.
> In the other case, where the file is found and its last block is null, we need to 
> call getFileInfo on all the locations to find the one where the file exists. This 
> can also be avoided by using a concurrent call, since we already have the 
> remoteLocation for which getBlockLocation returned a non-null entry.
>  
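As a generic illustration of the sequential-versus-concurrent trade-off described above 
(plain java.util.concurrent, not the Router's invokeSequential()/invokeConcurrent() 
helpers), querying every subcluster at once bounds the latency to the slowest single 
call instead of the sum of all calls:

{code}
import java.util.*;
import java.util.concurrent.*;

public class ConcurrentExistenceCheckSketch {
  /** Illustrative stand-in for a per-subcluster existence check. */
  interface Subcluster {
    String name();
    boolean fileExists(String path) throws Exception;
  }

  /** Ask all subclusters in parallel; return the first one that holds the file. */
  static Optional<String> findLocation(List<Subcluster> subclusters, String path)
      throws InterruptedException, ExecutionException {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, subclusters.size()));
    try {
      List<Future<Optional<String>>> futures = new ArrayList<>();
      for (Subcluster sc : subclusters) {
        Callable<Optional<String>> task = () ->
            sc.fileExists(path) ? Optional.of(sc.name()) : Optional.empty();
        futures.add(pool.submit(task));
      }
      for (Future<Optional<String>> f : futures) {
        Optional<String> hit = f.get();   // overall latency ~ slowest subcluster
        if (hit.isPresent()) {
          return hit;
        }
      }
      return Optional.empty();
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}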



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247802=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247802
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 24/May/19 01:04
Start Date: 24/May/19 01:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495436029
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 504 | trunk passed |
   | +1 | compile | 274 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 285 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 469 | trunk passed |
   | -0 | patch | 351 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 482 | the patch passed |
   | +1 | compile | 281 | the patch passed |
   | +1 | javac | 281 | the patch passed |
   | +1 | checkstyle | 93 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 500 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 149 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1405 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 10855 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.web.client.TestKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/810 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 63f6212146ca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 460ba7f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/14/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/14/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/14/testReport/ |
   | Max. process+thread count | 5333 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247802)
Time Spent: 9.5h  (was: 9h 20m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: 

[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847148#comment-16847148
 ] 

Íñigo Goiri commented on HDFS-13787:


[~RANith], sorry for taking over but with the collision it was easier to fix 
the warnings myself.
I think  [^HDFS-13787-HDFS-13891.008.patch] is pretty much it and ready for the 
final review.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787-HDFS-13891.007.patch, 
> HDFS-13787-HDFS-13891.008.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847147#comment-16847147
 ] 

Íñigo Goiri commented on HDFS-14090:


Much cleaner than I expected.
The tough part will be to have proper testing, but I think it should be doable.
We probably want to make the policy pluggable, and I don't know if we can 
extract some of the fair queues already available as one of the implementations.

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, RBF_ Isolation 
> design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking RPC queue times, due to back pressure the same will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, since the same RPC queue is 
> maintained for all calls at the router layer. Essentially the same IPC thread 
> pool is used by the router to connect to all name nodes.
> Currently the router uses one single RPC queue for all calls. Let's discuss how 
> we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject requests or send back errors 
> after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need to be 
> redesigned and reimplemented. Currently this layer is the same as the name node's.
> Opening this ticket to discuss, design and implement this feature.
>  
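As a bare-bones sketch of the second option mentioned above (a rate limiter configured 
per name node in front of a shared handler pool), assuming nothing about the Router's 
actual implementation:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

public class PerNamenodeLimiterSketch {
  // One permit pool per downstream name node, sized from configuration.
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int permitsPerNamenode;

  PerNamenodeLimiterSketch(int permitsPerNamenode) {
    this.permitsPerNamenode = permitsPerNamenode;
  }

  /** Returns true if the call may be forwarded; false means reject or back off. */
  boolean tryAcquire(String namenode) {
    return permits
        .computeIfAbsent(namenode, n -> new Semaphore(permitsPerNamenode))
        .tryAcquire();
  }

  /** Must be called when the downstream call completes, whether it succeeded or not. */
  void release(String namenode) {
    Semaphore s = permits.get(namenode);
    if (s != null) {
      s.release();
    }
  }
}
{code}

With this shape, a slow or overloaded name node exhausts only its own permits, so 
handlers stay available for calls to the healthy name nodes.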



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847141#comment-16847141
 ] 

Hadoop QA commented on HDFS-13787:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
44s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969561/HDFS-13787-HDFS-13891.008.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f81c21e53c4b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26837/testReport/ |
| Max. process+thread count | 1364 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26837/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247782=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247782
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 23:44
Start Date: 23/May/19 23:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495422900
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 504 | trunk passed |
   | +1 | compile | 251 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 301 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 498 | trunk passed |
   | -0 | patch | 346 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 463 | the patch passed |
   | +1 | compile | 284 | the patch passed |
   | +1 | javac | 284 | the patch passed |
   | +1 | checkstyle | 92 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 690 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 491 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 163 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1162 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 6047 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/810 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 73434c5070d4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a0e7dd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/15/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/15/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/15/testReport/ |
   | Max. process+thread count | 4869 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247782)
Time Spent: 9h 20m  (was: 9h 10m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: 

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847137#comment-16847137
 ] 

Eric Yang commented on HDDS-1458:
-

[~arp] yes. Patch 13 removes the dependency for now; once HDDS-1495 creates 
the tarball using the assembly plugin in the maven repository cache, the 
dependency can be established.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade. It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports. These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects.
> We can introduce a profile with the id "it" (short for integration tests). This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}
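For illustration, a rough sketch of what such a profile could look like in the pom.xml; 
the phases, executables and arguments below are placeholders, and the actual profile in 
the patch may differ:

{code}
<profile>
  <id>it</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>start-ozone-cluster</id>
            <phase>pre-integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>docker-compose</executable>
              <arguments>
                <argument>up</argument>
                <argument>-d</argument>
              </arguments>
            </configuration>
          </execution>
          <execution>
            <id>run-blockade-tests</id>
            <phase>integration-test</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>python</executable>
              <arguments>
                <argument>-m</argument>
                <argument>pytest</argument>
                <argument>blockade/</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
{code}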



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Attachment: HDDS-1458.013.patch

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade. It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports. These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects.
> We can introduce a profile with the id "it" (short for integration tests). This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-23 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847128#comment-16847128
 ] 

CR Hota commented on HDFS-14090:


[~aajisaka]  [~elgoiri]  [~brahmareddy]  [~surendrasingh]  [~ayushtkn]

Attached a patch which highlights how the changes will look and the overall 
idea. It's not final. Please take a look and share your thoughts.

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, RBF_ Isolation 
> design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking RPC queue times, due to back pressure the same will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, since the same RPC queue is 
> maintained for all calls at the router layer. Essentially the same IPC thread 
> pool is used by the router to connect to all name nodes.
> Currently the router uses one single RPC queue for all calls. Let's discuss how 
> we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject requests or send back errors 
> after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need to be 
> redesigned and reimplemented. Currently this layer is the same as the name node's.
> Opening this ticket to discuss, design and implement this feature.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-23 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14090:
---
Attachment: HDFS-14090-HDFS-13891.001.patch

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, RBF_ Isolation 
> design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients connecting to healthy clusters vs unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is heavily 
> loaded with calls spiking RPC queue times, due to back pressure the same will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, since the same RPC queue is 
> maintained for all calls at the router layer. Essentially the same IPC thread 
> pool is used by the router to connect to all name nodes.
> Currently the router uses one single RPC queue for all calls. Let's discuss how 
> we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject requests or send back errors 
> after a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need to be 
> redesigned and reimplemented. Currently this layer is the same as the name node's.
> Opening this ticket to discuss, design and implement this feature.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847126#comment-16847126
 ] 

Hudson commented on HDDS-1501:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16597 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16597/])
 HDDS-1501 : Create a Recon task interface to update internal DB on (arp7: rev 
4b099b8b890cc578b13630369ef44a42ecd6496c)
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/recovery/ReconOmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestContainerKeyMapperTask.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServer.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdateEvent.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskController.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServerConfigKeys.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconDBUpdateTask.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/persistence/TestReconInternalSchemaDefinition.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestOMDBUpdatesHandler.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/DummyReconDBTask.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/recovery/TestReconOmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-ozone/ozone-recon-codegen/src/main/java/org/hadoop/ozone/recon/codegen/ReconSchemaGenerationModule.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestRDBStore.java
* (add) 
hadoop-ozone/ozone-recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/ReconInternalSchemaDefinition.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/persistence/AbstractSqlDatabaseTest.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestContainerDBServiceProviderImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java


> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247758=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247758
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 22:50
Start Date: 23/May/19 22:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495412132
 
 
   Test failures are fixed in the last commit. Now tests are passing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247758)
Time Spent: 9h 10m  (was: 9h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement a DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush, while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer and the other is 
> readyBuffer. We add each entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> While a sync is in progress, we add new requests to the other buffer, and when 
> we can sync we use a *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in follow-up Jiras.
>  
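For reference, a condensed sketch of the double-buffer pattern described above, with 
the RocksDB batch commit replaced by a placeholder; the real OM code would add failure 
handling (terminating the process on a failed flush), metrics and bounded buffer sizes 
on top of this shape:

{code}
import java.util.ArrayList;
import java.util.List;

public class DoubleBufferSketch<T> {
  private List<T> currentBuffer = new ArrayList<>();
  private List<T> readyBuffer = new ArrayList<>();

  /** Transactions keep accumulating here while any outstanding flush completes. */
  public synchronized void add(T transaction) {
    currentBuffer.add(transaction);
    notifyAll();
  }

  /** Run by a dedicated flush thread: swap buffers, then write one batch. */
  public void flushLoop() throws InterruptedException {
    while (true) {
      List<T> batch;
      synchronized (this) {
        while (currentBuffer.isEmpty()) {
          wait();
        }
        // Swap: new transactions land in the (now empty) currentBuffer while
        // the swapped-out readyBuffer is flushed as a single batch.
        List<T> tmp = currentBuffer;
        currentBuffer = readyBuffer;
        readyBuffer = tmp;
        batch = readyBuffer;
      }
      flushBatch(batch);   // stand-in for one RocksDB write-batch commit
      batch.clear();
    }
  }

  private void flushBatch(List<T> batch) {
    // Placeholder: commit all entries in one DB batch; a failure here would be
    // treated as fatal in the real implementation.
  }
}
{code}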



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247757=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247757
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 22:47
Start Date: 23/May/19 22:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495411571
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 279 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 1 | trunk passed |
   | +1 | shadedclient | 866 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | trunk passed |
   | 0 | spotbugs | 287 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 480 | trunk passed |
   | -0 | patch | 327 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 464 | the patch passed |
   | +1 | compile | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 621 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 514 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 4951 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/810 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c001d3ecddda 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6a0e7dd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/13/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/13/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/13/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-810/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247757)
Time Spent: 9h  (was: 8h 50m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 

[jira] [Updated] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1501:

  Resolution: Fixed
   Fix Version/s: (was: 0.4.1)
Target Version/s: 0.5.0
  Status: Resolved  (was: Patch Available)

I've committed this to trunk. Thanks for the contribution [~avijayan]!

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1501) Create a Recon task interface that is used to update the aggregate DB whenever updates from OM are received.

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1501?focusedWorklogId=247754=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247754
 ]

ASF GitHub Bot logged work on HDDS-1501:


Author: ASF GitHub Bot
Created on: 23/May/19 22:35
Start Date: 23/May/19 22:35
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #819:  HDDS-1501 : 
Create a Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247754)
Time Spent: 4.5h  (was: 4h 20m)

> Create a Recon task interface that is used to update the aggregate DB 
> whenever updates from OM are received.
> 
>
> Key: HDDS-1501
> URL: https://issues.apache.org/jira/browse/HDDS-1501
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847114#comment-16847114
 ] 

Arpit Agarwal commented on HDDS-1458:
-

[~eyang] helped me debug this offline. Looks like this error was introduced by 
the following dependency:
{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-ozone-dist</artifactId>
  <type>tar.gz</type>
</dependency>
{code}

Can we leave this dependency out for now?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade. It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports. These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects.
> We can introduce a profile with the id "it" (short for integration tests). This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847100#comment-16847100
 ] 

Íñigo Goiri commented on HDFS-13787:


Thanks [~ayushtkn] for the comments, tackled them in 
[^HDFS-13787-HDFS-13891.008.patch].

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787-HDFS-13891.007.patch, 
> HDFS-13787-HDFS-13891.008.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: HDFS-13787-HDFS-13891.008.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787-HDFS-13891.007.patch, 
> HDFS-13787-HDFS-13891.008.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847096#comment-16847096
 ] 

Hadoop QA commented on HDFS-13909:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
52s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  3s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13909 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969559/HDFS-13909-HDFS-13891-05.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f678fd04d0a0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26836/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26836/testReport/ |
| Max. process+thread count | 1373 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-23 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847094#comment-16847094
 ] 

Takanobu Asanuma commented on HDFS-13955:
-

[~crh] [~elgoiri]
+1 for v004. Thanks!

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1565) Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1565?focusedWorklogId=247727=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247727
 ]

ASF GitHub Bot logged work on HDDS-1565:


Author: ASF GitHub Bot
Created on: 23/May/19 21:52
Start Date: 23/May/19 21:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #840: HDDS-1565. 
Rename k8s-dev and k8s-dev-push profiles to docker and docker-push
URL: https://github.com/apache/hadoop/pull/840#issuecomment-495398425
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247727)
Time Spent: 40m  (was: 0.5h)

> Rename k8s-dev and k8s-dev-push profiles to docker-build and docker-push
> 
>
> Key: HDDS-1565
> URL: https://issues.apache.org/jira/browse/HDDS-1565
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Based on the feedback from [~eyang] I realized that the names of the k8s-dev 
> and k8s-dev-push profiles are not expressive enough, since the created 
> containers can be used not only with Kubernetes but with any other container 
> orchestrator.
> I propose to rename them to docker-build/docker-push.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847092#comment-16847092
 ] 

Hadoop QA commented on HDFS-13909:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 55s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13909 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969557/HDFS-13909-HDFS-13891-04.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7e108606b9b0 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26835/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26835/testReport/ |
| Max. process+thread count | 1021 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Created] (HDDS-1586) Allow Ozone RPC client to read with topology awareness

2019-05-23 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1586:


 Summary: Allow Ozone RPC client to read with topology awareness
 Key: HDDS-1586
 URL: https://issues.apache.org/jira/browse/HDDS-1586
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


The idea is to leverage the node location from the block locations and prefer 
reading from closer block replicas. 
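
A rough sketch of that selection step; the Replica type and the precomputed distance are hypothetical placeholders for whatever topology information comes back with the block locations, not the actual HDDS-1586 design:

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only, not the actual HDDS-1586 design. Replica and the
// precomputed distance stand in for the node information returned with the
// block locations.
public final class ClosestReplicaSketch {

  static final class Replica {
    final String datanode;
    final int distanceToClient; // e.g. 0 = local node, 2 = same rack, 4 = off rack

    Replica(String datanode, int distanceToClient) {
      this.datanode = datanode;
      this.distanceToClient = distanceToClient;
    }
  }

  /** Order replicas so the client tries the closest one first. */
  static List<Replica> sortByDistance(List<Replica> replicas) {
    List<Replica> sorted = new ArrayList<>(replicas);
    sorted.sort(Comparator.comparingInt(r -> r.distanceToClient));
    return sorted;
  }
}
{code}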



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-23 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-700:

   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks [~Sammi] for the contribution and all for the reviews. I've committed 
the patch to trunk. 

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.4.1
>
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS: by default, with 3 replicas, two 
> replicas will be on the same rack, and the third replica and all remaining 
> replicas will be on different racks.
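
A toy sketch of that rule, not the committed HDDS-700 code; the rack-to-datanodes map is a hypothetical input used only for this example:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy sketch of the rule above, not the committed HDDS-700 code: place the
// first two replicas on one rack and every remaining replica on a different
// rack. The rack -> datanodes map is a hypothetical input.
public final class RackRuleSketch {

  static List<String> choose(Map<String, List<String>> nodesByRack,
      int replicas) {
    List<String> chosen = new ArrayList<>();
    List<String> racks = new ArrayList<>(nodesByRack.keySet());
    if (racks.isEmpty() || replicas <= 0) {
      return chosen;
    }
    // First two replicas share the first rack (when it has enough nodes).
    List<String> firstRack = nodesByRack.get(racks.get(0));
    for (int i = 0; i < Math.min(2, replicas) && i < firstRack.size(); i++) {
      chosen.add(firstRack.get(i));
    }
    // Remaining replicas go to different racks, one node per rack.
    int rackIdx = 1;
    while (chosen.size() < replicas && rackIdx < racks.size()) {
      List<String> rack = nodesByRack.get(racks.get(rackIdx++));
      if (!rack.isEmpty()) {
        chosen.add(rack.get(0));
      }
    }
    return chosen;
  }
}
{code}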



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847069#comment-16847069
 ] 

Hadoop QA commented on HDFS-13787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 41s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969556/HDFS-13787-HDFS-13891.007.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7835ccc55967 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26834/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26834/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |

[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847061#comment-16847061
 ] 

Íñigo Goiri commented on HDFS-13955:


Thanks [~tasanuma] for catching this; anything else?

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247696=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247696
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 21:14
Start Date: 23/May/19 21:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-495387480
 
 
   Thank You @arp7 for the review.
   Addressed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247696)
Time Spent: 8h 50m  (was: 8h 40m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer.  We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has two buffers: one is currentBuffer and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> While a sync is in progress, new requests are added to the other buffer, and 
> when we can sync we use *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. Flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further jiras.
>  
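
A toy sketch of the scheme described above, not the actual OzoneManagerDoubleBuffer; the BatchSink interface is a placeholder standing in for the RocksDB write-batch commit:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the double-buffer scheme described above, not the actual
// OzoneManagerDoubleBuffer. Writers append to currentBuffer while a single
// flush thread batch-commits readyBuffer; BatchSink stands in for the
// RocksDB write-batch commit.
public class DoubleBufferSketch<T> {

  public interface BatchSink<T> {
    void commitBatch(List<T> batch);
  }

  private List<T> currentBuffer = new ArrayList<>();
  private List<T> readyBuffer = new ArrayList<>();

  /** Called by request handlers: just record the transaction in memory. */
  public synchronized void add(T txn) {
    currentBuffer.add(txn);
    notifyAll();
  }

  /** One iteration of the flush thread: swap buffers, then batch-commit. */
  public void flushOnce(BatchSink<T> sink) throws InterruptedException {
    List<T> batch;
    synchronized (this) {
      while (currentBuffer.isEmpty()) {
        wait();
      }
      // Swap: new transactions keep accumulating in the (now empty)
      // current buffer while this batch is written out.
      List<T> tmp = currentBuffer;
      currentBuffer = readyBuffer;
      readyBuffer = tmp;
      batch = readyBuffer;
    }
    sink.commitBatch(batch); // one batch commit instead of per-op puts
    synchronized (this) {
      readyBuffer.clear();
    }
  }
}
{code}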



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247694=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247694
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 21:13
Start Date: 23/May/19 21:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287136965
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247695=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247695
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 21:13
Start Date: 23/May/19 21:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287136985
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247692=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247692
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 21:13
Start Date: 23/May/19 21:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287136934
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247693=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247693
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 21:13
Start Date: 23/May/19 21:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287136952
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+

[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-23 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847048#comment-16847048
 ] 

Arpit Agarwal commented on HDDS-1458:
-

I get the following error with the v12 patch:

{code}
~/ozone/hadoop-ozone/fault-injection-test/network-tests$ mvn verify -Pit
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
[INFO] Scanning for projects...
[INFO]
[INFO] < org.apache.hadoop:hadoop-ozone-network-tests >
[INFO] Building Apache Hadoop Ozone Network Tests 0.5.0-SNAPSHOT
[INFO] [ jar ]-
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  1.419 s
[INFO] Finished at: 2019-05-23T13:58:08-07:00
[INFO] 
[ERROR] Failed to execute goal on project hadoop-ozone-network-tests: Could not 
resolve dependencies for project 
org.apache.hadoop:hadoop-ozone-network-tests:jar:0.5.0-SNAPSHOT: Failure to 
find org.apache.hadoop:hadoop-ozone-dist:tar.gz:0.5.0-SNAPSHOT in 
https://repository.apache.org/content/repositories/snapshots was cached in the 
local repository, resolution will not be reattempted until the update interval 
of apache.snapshots.https has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
{code}

I was not seeing this before. I had already done a clean build and install 
before running _mvn verify_.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1551:

Target Version/s: 0.5.0

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM Cache and double buffer.
> Also, previously the OM used the Ratis client to communicate with the Ratis 
> server; instead of that, use the Ratis server APIs.
>  
>  # Implement the checkAcl method with the new Request classes. Since in the 
> Grpc context we will not have a UGI object, we need to set the userName and 
> remoteHostAddress during the pre-Execute step, and use this information to 
> construct the UGI and InetAddress and then call checkAcl.
>  # Implement takeSnapshot once the flush to the OM DB is completed.
>  
> This Jira will add the changes to implement bucket operations; HA/Non-HA will 
> have different code paths, but once all requests are implemented there will 
> be a single code path.
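
A small sketch of point 1 above, not the actual HDDS-1551 patch; the class and method names are placeholders, while UserGroupInformation.createRemoteUser and InetAddress.getByName are the existing APIs being relied on:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch of point 1 above, not the actual HDDS-1551 patch: rebuild the caller
// identity from the userName and remote host address captured during
// pre-Execute. The class and method names here are placeholders.
public final class AclContextSketch {

  /** Recreate a UGI from the user name recorded in pre-Execute. */
  static UserGroupInformation remoteUgi(String userName) {
    return UserGroupInformation.createRemoteUser(userName);
  }

  /** Recreate the remote InetAddress from the recorded host address. */
  static InetAddress remoteAddress(String remoteHostAddress)
      throws UnknownHostException {
    return InetAddress.getByName(remoteHostAddress);
  }
}
{code}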



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1551:

Fix Version/s: (was: 0.4.1)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Implement Bucket write requests to use the OM Cache and double buffer.
> Also, previously the OM used the Ratis client to communicate with the Ratis 
> server; instead of that, use the Ratis server APIs.
>  
>  # Implement the checkAcl method with the new Request classes. Since in the 
> Grpc context we will not have a UGI object, we need to set the userName and 
> remoteHostAddress during the pre-Execute step, and use this information to 
> construct the UGI and InetAddress and then call checkAcl.
>  # Implement takeSnapshot once the flush to the OM DB is completed.
>  
> This Jira will add the changes to implement bucket operations; HA/Non-HA will 
> have different code paths, but once all requests are implemented there will 
> be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13909:

Attachment: HDFS-13909-HDFS-13891-05.patch

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch, 
> HDFS-13909-HDFS-13891-04.patch, HDFS-13909-HDFS-13891-05.patch
>
>
> Currently the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847040#comment-16847040
 ] 

Ayush Saxena commented on HDFS-13787:
-

Thanx for the patch. A couple of comments:
* The quota check in the locations can be avoided; there is no need to verify 
quota for unrelated operations (by default it checks, so pass false explicitly).
* 
{code:java}
+  /**
+   * Merge the outputs from multiple namespaces.
+   *
+   * @param <T> The type of the objects to merge.
+   * @param list Namespace to output array.
+   * @param clazz Class of the values.
+   * @return Array with the outputs.
+   */
+  protected static <T> T[] merge(List<T[]> list, Class<T> clazz) {
+
+    // Put all results into a set to avoid repeats
+    Set<T> ret = new LinkedHashSet<>();
+    for (T[] values : list) {
+      for (T val : values) {
+        ret.add(val);
+      }
+    }
+    return toArray(ret, clazz);
+  }
+
+  /**
+   * Convert a set of values into an array.
+   * @param <T> The type of the return objects.
+   * @param set Input set.
+   * @param clazz Class of the values.
+   * @return Array with the values in set.
+   */
+  private static <T> T[] toArray(Collection<T> set, Class<T> clazz) {
+    @SuppressWarnings("unchecked")
+    T[] combinedData = (T[]) Array.newInstance(clazz, set.size());
+    combinedData = set.toArray(combinedData);
+    return combinedData;
+  }
{code}

This is already in {{RouterRpcServer}}; it can be used from there rather than 
rewriting the same code.
* Instead of {{isPathAll}} I think {{isInvokeConcurrent}} should be used in 
{{getSnapshotDiffReportListing}}
*  
{code:java}
+  RemoteLocation loc0 = locations.get(0);
+  return (SnapshotDiffReportListing) rpcClient.invokeSingle(
+  loc0, remoteMethod);
+}
{code}

Why not {{invokeSequential}} ?
* 
{code:java}
+  private final ActiveNamenodeResolver namenodeResolver;
+
+
+  public RouterSnapshot(RouterRpcServer server) {
{code}

Avoid extra line.




> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787-HDFS-13891.007.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847032#comment-16847032
 ] 

Ayush Saxena commented on HDFS-13909:
-

Thanx [~elgoiri] for the review.
That was the only way I found without being hacky. I have refactored as you said.
Please review!

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch, 
> HDFS-13909-HDFS-13891-04.patch
>
>
> Currently the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13909:

Attachment: HDFS-13909-HDFS-13891-04.patch

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch, 
> HDFS-13909-HDFS-13891-04.patch
>
>
> Currently the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847031#comment-16847031
 ] 

Hadoop QA commented on HDFS-13787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 22s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969551/HDFS-13787-HDFS-13891.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8912804c9176 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26833/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26833/artifact/out/whitespace-eol.txt
 |
| 

[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: HDFS-13787-HDFS-13891.007.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787-HDFS-13891.007.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847021#comment-16847021
 ] 

Hadoop QA commented on HDFS-13909:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
40s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13909 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969548/HDFS-13909-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ed5b2a6fc81 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26832/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26832/testReport/ |
| Max. process+thread count | 1460 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847018#comment-16847018
 ] 

Hadoop QA commented on HDFS-13787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 6 unchanged - 0 fixed = 9 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
37s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969551/HDFS-13787-HDFS-13891.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 859c528680a0 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26831/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26831/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26831/testReport/ |
| Max. process+thread count | 1373 

[jira] [Updated] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1496:

Status: Patch Available  (was: Open)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.
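
A minimal, self-contained sketch of the checksum-aligned range computation
described above (illustrative only; the class and method names are
hypothetical, not the patch's API):

    /** Illustrative only: the chunk range that must be read so checksum
     *  verification covers the requested bytes. */
    public final class ChecksumAlignedRange {

      /** Returns {start, endExclusive}, aligned to checksum boundaries. */
      static long[] alignedRange(long reqStart, long reqEndInclusive,
          int bytesPerChecksum, long chunkLen) {
        long start = (reqStart / bytesPerChecksum) * bytesPerChecksum;
        long end = ((reqEndInclusive / bytesPerChecksum) + 1) * bytesPerChecksum;
        return new long[] {start, Math.min(end, chunkLen)};
      }

      public static void main(String[] args) {
        // The example from the description: bytes 120..450 are requested and
        // checksums cover every 100 bytes, so bytes 100..499 must be read.
        long[] r = alignedRange(120, 450, 100, 4096);
        System.out.println(r[0] + ".." + (r[1] - 1));   // prints 100..499
      }
    }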






[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247659=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247659
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 20:00
Start Date: 23/May/19 20:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287111022
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+  new 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247656=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247656
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 20:00
Start Date: 23/May/19 20:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287111228
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+  new 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247658=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247658
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 20:00
Start Date: 23/May/19 20:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287110363
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
 
 Review comment:
   If multiple test methods are calling setup(), it should probably be a 
`@Before` method.
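
A hedged sketch of that suggestion, based only on the setup()/stop() helpers
quoted above (assumes org.junit.Before and org.junit.After are imported; not
the final patch):

    // Sketch only: promote the quoted helpers to JUnit lifecycle methods so
    // every test gets a fresh double buffer without try/finally blocks.
    @Before
    public void setUp() throws IOException {
      OzoneConfiguration configuration = new OzoneConfiguration();
      configuration.set(OZONE_METADATA_DIRS,
          folder.newFolder().getAbsolutePath());
      omMetadataManager = new OmMetadataManagerImpl(configuration);
      doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
    }

    @After
    public void tearDown() {
      doubleBuffer.stop();
    }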
 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=247657=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247657
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 23/May/19 20:00
Start Date: 23/May/19 20:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r287110914
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,397 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Queue;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.ozone.om.response.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.OMBucketDeleteResponse;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMVolumeCreateResponse;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.Time;
+
+
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.junit.Assert.fail;
+
+/**
+ * This class tests OzoneManagerDouble Buffer.
+ */
+public class TestOzoneManagerDoubleBuffer {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneManagerDoubleBuffer doubleBuffer;
+  private AtomicLong trxId = new AtomicLong(0);
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private void setup() throws IOException  {
+OzoneConfiguration configuration = new OzoneConfiguration();
+configuration.set(OZONE_METADATA_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager =
+new OmMetadataManagerImpl(configuration);
+doubleBuffer = new OzoneManagerDoubleBuffer(omMetadataManager);
+  }
+
+  private void stop() {
+doubleBuffer.stop();
+  }
+
+  @Test(timeout = 300_000)
+  public void testDoubleBufferWithDummyResponse() throws Exception {
+try {
+  setup();
+  String volumeName = UUID.randomUUID().toString();
+  int bucketCount = 100;
+  for (int i=0; i < bucketCount; i++) {
+doubleBuffer.add(createDummyBucketResponse(volumeName,
+UUID.randomUUID().toString()), trxId.incrementAndGet());
+  }
+  GenericTestUtils.waitFor(() ->
+  doubleBuffer.getFlushedTransactionCount() == bucketCount, 100,
+  12);
+  Assert.assertTrue(omMetadataManager.countRowsInTable(
+  omMetadataManager.getBucketTable()) == (bucketCount));
+  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+} finally {
+  stop();
+}
+  }
+
+
+  @Test(timeout = 300_000)
+  public void testDoubleBuffer() throws Exception {
+// This test checks whether count in tables are correct or not.
+testDoubleBuffer(1, 10);
+testDoubleBuffer(10, 100);
+testDoubleBuffer(100, 100);
+testDoubleBuffer(1000, 1000);
+  }
+
+
+
+  @Test
+  public void testDoubleBufferWithMixOfTransactions() throws Exception {
+// This test checks count, data in table is correct or not.
+try {
+  setup();
+
+  Queue< OMBucketCreateResponse > bucketQueue =
+  new ConcurrentLinkedQueue<>();
+  Queue< OMBucketDeleteResponse > deleteBucketQueue =
+  new 

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247645
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287091533
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -43,467 +41,334 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
-import java.util.concurrent.ExecutionException;
 
 /**
  * An {@link InputStream} used by the REST service in combination with the
  * SCMClient to read the value of a key from a sequence
  * of container chunks.  All bytes of the key value are stored in container
- * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * chunks. Each chunk may contain multiple underlying {@link ByteBuffer}
  * instances.  This class encapsulates all state management for iterating
- * through the sequence of chunks and the sequence of buffers within each 
chunk.
+ * through the sequence of chunks through {@link ChunkInputStream}.
  */
 public class BlockInputStream extends InputStream implements Seekable {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BlockInputStream.class);
+
   private static final int EOF = -1;
 
   private final BlockID blockID;
+  private final long length;
+  private Pipeline pipeline;
+  private final long containerKey;
+  private final Token token;
+  private final boolean verifyChecksum;
   private final String traceID;
   private XceiverClientManager xceiverClientManager;
   private XceiverClientSpi xceiverClient;
-  private List chunks;
-  // ChunkIndex points to the index current chunk in the buffers or the the
-  // index of chunk which will be read next into the buffers in
-  // readChunkFromContainer().
+  private boolean initialized = false;
+
+  // List of ChunkInputStreams, one for each chunk in the block
+  private List chunkStreams;
+
+  // chunkOffsets[i] stores the index of the first data byte in
+  // chunkStream i w.r.t the block data.
+  // Let’s say we have chunk size as 40 bytes. And let's say the parent
+  // block stores data from index 200 and has length 400.
+  // The first 40 bytes of this block will be stored in chunk[0], next 40 in
+  // chunk[1] and so on. But since the chunkOffsets are w.r.t the block only
+  // and not the key, the values in chunkOffsets will be [0, 40, 80,].
+  private long[] chunkOffsets = null;
+
+  // Index of the chunkStream corresponding to the current postion of the
+  // BlockInputStream i.e offset of the data to be read next from this block
   private int chunkIndex;
-  // ChunkIndexOfCurrentBuffer points to the index of chunk read into the
-  // buffers or index of the last chunk in the buffers. It is updated only
-  // when a new chunk is read from container into the buffers.
-  private int chunkIndexOfCurrentBuffer;
-  private long[] chunkOffset;
-  private List buffers;
-  private int bufferIndex;
-  private long bufferPosition;
-  private final boolean verifyChecksum;
 
-  /**
-   * Creates a new BlockInputStream.
-   *
-   * @param blockID block ID of the chunk
-   * @param xceiverClientManager client manager that controls client
-   * @param xceiverClient client to perform container calls
-   * @param chunks list of chunks to read
-   * @param traceID container protocol call traceID
-   * @param verifyChecksum verify checksum
-   * @param initialPosition the initial position of the stream pointer. This
-   *position is seeked now if the up-stream was seeked
-   *before this was created.
-   */
-  public BlockInputStream(
-  BlockID blockID, XceiverClientManager xceiverClientManager,
-  XceiverClientSpi xceiverClient, List chunks, String traceID,
-  boolean verifyChecksum, long initialPosition) throws IOException {
-this.blockID = blockID;
-this.traceID = traceID;
-this.xceiverClientManager = xceiverClientManager;
-this.xceiverClient = xceiverClient;
-this.chunks = chunks;
-this.chunkIndex = 0;
-this.chunkIndexOfCurrentBuffer = -1;
-// chunkOffset[i] stores offset at which chunk i stores data in
-// BlockInputStream
-this.chunkOffset = new long[this.chunks.size()];
-initializeChunkOffset();
-this.buffers = null;
-this.bufferIndex = 0;
-this.bufferPosition = -1;
+  // Position of the BlockInputStream is maintainted by this variable till
+  // the stream is initialized. This postion is w.r.t to the block only and
+  // not the key.
+  // For the above 
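
The chunkOffsets bookkeeping described in the quoted comments is a prefix sum
of the chunk lengths; a minimal standalone sketch (hypothetical names, not the
patch's code):

    /** Standalone illustration of block-relative chunk offsets (prefix sums). */
    public final class ChunkOffsetsDemo {

      /** offsets[i] = index of the first byte of chunk i within the block. */
      static long[] chunkOffsets(long[] chunkLengths) {
        long[] offsets = new long[chunkLengths.length];
        for (int i = 1; i < chunkLengths.length; i++) {
          offsets[i] = offsets[i - 1] + chunkLengths[i - 1];
        }
        return offsets;
      }

      public static void main(String[] args) {
        // 40-byte chunks, as in the comment above: the offsets are relative to
        // the block, not the key, so they come out as [0, 40, 80, ...].
        long[] lengths = {40, 40, 40, 40};
        System.out.println(java.util.Arrays.toString(chunkOffsets(lengths)));
        // prints [0, 40, 80, 120]
      }
    }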

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247641
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287052916
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 ##
 @@ -53,58 +46,77 @@
 
   private static final int EOF = -1;
 
-  private final ArrayList streamEntries;
-  // streamOffset[i] stores the offset at which blockInputStream i stores
-  // data in the key
-  private long[] streamOffset = null;
-  private int currentStreamIndex;
+  private String key;
   private long length = 0;
   private boolean closed = false;
-  private String key;
 
-  public KeyInputStream() {
-streamEntries = new ArrayList<>();
-currentStreamIndex = 0;
-  }
+  // List of BlockInputStreams, one for each block in the key
+  private final List blockStreams;
 
-  @VisibleForTesting
-  public synchronized int getCurrentStreamIndex() {
-return currentStreamIndex;
-  }
+  // blockOffsets[i] stores the index of the first data byte in
+  // blockStream i w.r.t the key data.
+  // For example, let’s say the block size is 200 bytes and block[0] stores
+  // data from indices 0 - 199, block[1] from indices 200 - 399 and so on.
+  // Then, blockOffset[0] = 0 (the offset of the first byte of data in
+  // block[0]), blockOffset[1] = 200 and so on.
+  private long[] blockOffsets = null;
 
-  @VisibleForTesting
-  public long getRemainingOfIndex(int index) throws IOException {
-return streamEntries.get(index).getRemaining();
+  // Index of the blockStream corresponding to the current position of the
+  // KeyInputStream i.e. offset of the data to be read next
+  private int blockIndex;
+
+  // Tracks the blockIndex corresponding to the last seeked position so that it
+  // can be reset if a new position is seeked.
+  private int blockIndexOfPrevPosition;
+
+  public KeyInputStream() {
+blockStreams = new ArrayList<>();
+blockIndex = 0;
   }
 
   /**
-   * Append another stream to the end of the list.
-   *
-   * @param stream   the stream instance.
-   * @param streamLength the max number of bytes that should be written to this
-   * stream.
+   * For each block in keyInfo, add a BlockInputStream to blockStreams.
*/
-  @VisibleForTesting
-  public synchronized void addStream(BlockInputStream stream,
-  long streamLength) {
-streamEntries.add(new ChunkInputStreamEntry(stream, streamLength));
+  public static LengthInputStream getFromOmKeyInfo(OmKeyInfo keyInfo,
+  XceiverClientManager xceiverClientManager,
+  StorageContainerLocationProtocol storageContainerLocationClient,
 
 Review comment:
   Unused parameter storageContainerLocationClient
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247641)
Time Spent: 3h 20m  (was: 3h 10m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.




[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247639=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247639
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287063856
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 ##
 @@ -172,35 +198,40 @@ public void seek(long pos) throws IOException {
   throw new EOFException(
   "EOF encountered at pos: " + pos + " for key: " + key);
 }
-Preconditions.assertTrue(currentStreamIndex >= 0);
-if (currentStreamIndex >= streamEntries.size()) {
-  currentStreamIndex = Arrays.binarySearch(streamOffset, pos);
-} else if (pos < streamOffset[currentStreamIndex]) {
-  currentStreamIndex =
-  Arrays.binarySearch(streamOffset, 0, currentStreamIndex, pos);
-} else if (pos >= streamOffset[currentStreamIndex] + streamEntries
-.get(currentStreamIndex).length) {
-  currentStreamIndex = Arrays
-  .binarySearch(streamOffset, currentStreamIndex + 1,
-  streamEntries.size(), pos);
+Preconditions.assertTrue(blockIndex >= 0);
 
 Review comment:
   Why is this condition added?
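
For context, a simplified, self-contained illustration of the offset lookup
that the quoted seek() code performs with Arrays.binarySearch (illustration
only, not the patch's code):

    import java.util.Arrays;

    /** Simplified illustration: map a seek position to its containing block. */
    public final class BlockIndexLookup {

      /** blockOffsets must be sorted and start at 0. */
      static int indexForPosition(long[] blockOffsets, long pos) {
        int idx = Arrays.binarySearch(blockOffsets, pos);
        // binarySearch returns -(insertionPoint) - 1 when pos is not an exact
        // block start; the containing block precedes the insertion point.
        return idx >= 0 ? idx : -idx - 2;
      }

      public static void main(String[] args) {
        long[] offsets = {0, 200, 400};                      // 200-byte blocks
        System.out.println(indexForPosition(offsets, 0));    // 0
        System.out.println(indexForPosition(offsets, 199));  // 0
        System.out.println(indexForPosition(offsets, 200));  // 1
        System.out.println(indexForPosition(offsets, 450));  // 2
      }
    }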
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247639)
Time Spent: 3h 10m  (was: 3h)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.






[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247640=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247640
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287072852
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -43,467 +41,334 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
-import java.util.concurrent.ExecutionException;
 
 /**
  * An {@link InputStream} used by the REST service in combination with the
  * SCMClient to read the value of a key from a sequence
  * of container chunks.  All bytes of the key value are stored in container
- * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * chunks. Each chunk may contain multiple underlying {@link ByteBuffer}
 
 Review comment:
   NIT: Existing issue, but does the javadoc need to be updated? It currently 
says this is used by the REST service and the SCMClient.
   It is actually used by KeyInputStream (and RpcClient uses it too); to read 
data from container chunks we talk to the datanode using the StorageContainer 
protocol client.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247640)
Time Spent: 3h 20m  (was: 3h 10m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.






[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247646
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287051851
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 ##
 @@ -53,58 +46,77 @@
 
   private static final int EOF = -1;
 
-  private final ArrayList streamEntries;
-  // streamOffset[i] stores the offset at which blockInputStream i stores
-  // data in the key
-  private long[] streamOffset = null;
-  private int currentStreamIndex;
+  private String key;
   private long length = 0;
   private boolean closed = false;
-  private String key;
 
-  public KeyInputStream() {
-streamEntries = new ArrayList<>();
-currentStreamIndex = 0;
-  }
+  // List of BlockInputStreams, one for each block in the key
+  private final List blockStreams;
 
-  @VisibleForTesting
-  public synchronized int getCurrentStreamIndex() {
-return currentStreamIndex;
-  }
+  // blockOffsets[i] stores the index of the first data byte in
+  // blockStream i w.r.t the key data.
 
 Review comment:
   NIT: "i.e." or an unwanted "i"?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247646)
Time Spent: 3h 50m  (was: 3h 40m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.






[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247643
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287096583
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
+ * encapsulates all state management for iterating through the sequence of
+ * buffers within each chunk.
+ */
+public class ChunkInputStream extends InputStream implements Seekable {
+
+  private final ChunkInfo chunkInfo;
+  private final long length;
+  private final BlockID blockID;
+  private final String traceID;
+  private XceiverClientSpi xceiverClient;
+  private final boolean verifyChecksum;
+  private boolean allocated = false;
+
+  // Buffer to store the chunk data read from the DN container
+  private List buffers;
+
+  // Index of the buffers corresponding to the current postion of the buffers
 
 Review comment:
   NIT: postion -> position
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247643)
Time Spent: 3h 40m  (was: 3.5h)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of the
> chunk file needed by the client is read, plus the part of the chunk file
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the
> chunk, and a checksum is stored for every 100 bytes in the chunk, i.e. the
> first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that
> checksum verification can be done.






[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247642=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247642
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287092731
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
 ##
 @@ -43,467 +41,334 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
-import java.util.concurrent.ExecutionException;
 
 /**
  * An {@link InputStream} used by the REST service in combination with the
  * SCMClient to read the value of a key from a sequence
  * of container chunks.  All bytes of the key value are stored in container
- * chunks.  Each chunk may contain multiple underlying {@link ByteBuffer}
+ * chunks. Each chunk may contain multiple underlying {@link ByteBuffer}
  * instances.  This class encapsulates all state management for iterating
- * through the sequence of chunks and the sequence of buffers within each 
chunk.
+ * through the sequence of chunks through {@link ChunkInputStream}.
  */
 public class BlockInputStream extends InputStream implements Seekable {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BlockInputStream.class);
+
   private static final int EOF = -1;
 
   private final BlockID blockID;
+  private final long length;
+  private Pipeline pipeline;
+  private final long containerKey;
+  private final Token token;
+  private final boolean verifyChecksum;
   private final String traceID;
   private XceiverClientManager xceiverClientManager;
   private XceiverClientSpi xceiverClient;
-  private List chunks;
-  // ChunkIndex points to the index current chunk in the buffers or the the
-  // index of chunk which will be read next into the buffers in
-  // readChunkFromContainer().
+  private boolean initialized = false;
+
+  // List of ChunkInputStreams, one for each chunk in the block
+  private List chunkStreams;
+
+  // chunkOffsets[i] stores the index of the first data byte in
+  // chunkStream i w.r.t the block data.
+  // Let’s say we have chunk size as 40 bytes. And let's say the parent
+  // block stores data from index 200 and has length 400.
+  // The first 40 bytes of this block will be stored in chunk[0], next 40 in
+  // chunk[1] and so on. But since the chunkOffsets are w.r.t the block only
+  // and not the key, the values in chunkOffsets will be [0, 40, 80,].
+  private long[] chunkOffsets = null;
+
+  // Index of the chunkStream corresponding to the current position of the
+  // BlockInputStream, i.e. the offset of the data to be read next from this block
   private int chunkIndex;
-  // ChunkIndexOfCurrentBuffer points to the index of chunk read into the
-  // buffers or index of the last chunk in the buffers. It is updated only
-  // when a new chunk is read from container into the buffers.
-  private int chunkIndexOfCurrentBuffer;
-  private long[] chunkOffset;
-  private List buffers;
-  private int bufferIndex;
-  private long bufferPosition;
-  private final boolean verifyChecksum;
 
-  /**
-   * Creates a new BlockInputStream.
-   *
-   * @param blockID block ID of the chunk
-   * @param xceiverClientManager client manager that controls client
-   * @param xceiverClient client to perform container calls
-   * @param chunks list of chunks to read
-   * @param traceID container protocol call traceID
-   * @param verifyChecksum verify checksum
-   * @param initialPosition the initial position of the stream pointer. This
-   *position is seeked now if the up-stream was seeked
-   *before this was created.
-   */
-  public BlockInputStream(
-  BlockID blockID, XceiverClientManager xceiverClientManager,
-  XceiverClientSpi xceiverClient, List chunks, String traceID,
-  boolean verifyChecksum, long initialPosition) throws IOException {
-this.blockID = blockID;
-this.traceID = traceID;
-this.xceiverClientManager = xceiverClientManager;
-this.xceiverClient = xceiverClient;
-this.chunks = chunks;
-this.chunkIndex = 0;
-this.chunkIndexOfCurrentBuffer = -1;
-// chunkOffset[i] stores offset at which chunk i stores data in
-// BlockInputStream
-this.chunkOffset = new long[this.chunks.size()];
-initializeChunkOffset();
-this.buffers = null;
-this.bufferIndex = 0;
-this.bufferPosition = -1;
+  // Position of the BlockInputStream is maintained by this variable till
+  // the stream is initialized. This position is w.r.t. the block only and
+  // not the key.
+  // For the above 
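For reference, a minimal sketch of how block-relative offsets like the chunkOffsets array above can be computed, assuming the per-chunk lengths are available; the class and method names are illustrative and not the patch code.
{code}
final class ChunkOffsetsSketch {
  // chunkOffsets[i] is the block-relative index of the first byte served by
  // chunk i, i.e. a running prefix sum of the chunk lengths. For 40-byte
  // chunks this yields [0, 40, 80, ...] as in the comment above.
  static long[] computeChunkOffsets(long[] chunkLengths) {
    long[] chunkOffsets = new long[chunkLengths.length];
    long runningOffset = 0;
    for (int i = 0; i < chunkLengths.length; i++) {
      chunkOffsets[i] = runningOffset;
      runningOffset += chunkLengths[i];
    }
    return chunkOffsets;
  }
}
{code}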

[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247644
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287094736
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
 
 Review comment:
   Same as above: the javadoc description needs modification.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247644)
Time Spent: 3h 40m  (was: 3.5h)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and the checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum is for bytes 0 to 99, the next for bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247637=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247637
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287052394
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 ##
 @@ -53,58 +46,77 @@
 
   private static final int EOF = -1;
 
-  private final ArrayList streamEntries;
-  // streamOffset[i] stores the offset at which blockInputStream i stores
-  // data in the key
-  private long[] streamOffset = null;
-  private int currentStreamIndex;
+  private String key;
   private long length = 0;
   private boolean closed = false;
-  private String key;
 
-  public KeyInputStream() {
-streamEntries = new ArrayList<>();
-currentStreamIndex = 0;
-  }
+  // List of BlockInputStreams, one for each block in the key
+  private final List blockStreams;
 
-  @VisibleForTesting
-  public synchronized int getCurrentStreamIndex() {
-return currentStreamIndex;
-  }
+  // blockOffsets[i] stores the index of the first data byte in
+  // blockStream i w.r.t the key data.
+  // For example, let’s say the block size is 200 bytes and block[0] stores
+  // data from indices 0 - 199, block[1] from indices 200 - 399 and so on.
 
 Review comment:
   NIT: blockStream[1]
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247637)
Time Spent: 3h  (was: 2h 50m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and the checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum is for bytes 0 to 99, the next for bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=247638=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247638
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 23/May/19 19:40
Start Date: 23/May/19 19:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #804: 
HDDS-1496. Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r287088418
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
 ##
 @@ -146,22 +160,34 @@ public synchronized int read(byte[] b, int off, int len) 
throws IOException {
 // this case.
 throw new IOException(String.format(
 "Inconsistent read for blockID=%s length=%d numBytesRead=%d",
-current.blockInputStream.getBlockID(), current.length,
-numBytesRead));
+current.getBlockID(), current.getLength(), numBytesRead));
   }
   totalReadLen += numBytesRead;
   off += numBytesRead;
   len -= numBytesRead;
   if (current.getRemaining() <= 0 &&
-  ((currentStreamIndex + 1) < streamEntries.size())) {
-currentStreamIndex += 1;
+  ((blockIndex + 1) < blockStreams.size())) {
+blockIndex += 1;
   }
 }
 return totalReadLen;
   }
 
+  /**
+   * Seeks the KeyInputStream to the specified position. This involves 2 steps:
+   *1. Updating the blockIndex to the blockStream corresponding to the
+   *seeked position.
+   *2. Seeking the corresponding blockStream to the adjusted position.
+   *
+   * For example, let’s say the block size is 200 bytes and block[0] stores
+   * data from indices 0 - 199, block[1] from indices 200 - 399 and so on.
+   * Let’s say we seek to position 240. In the first step, the blockIndex
+   * would be updated to 1 as indices 200 - 399 reside in blockStream[1]. In
+   * the second step, the blockStream[1] would be seeked to position 40 (=
+   * 240 - blockOffset[1] (= 200)).
+   */
   @Override
-  public void seek(long pos) throws IOException {
+  public synchronized void seek(long pos) throws IOException {
 checkNotClosed();
 
 Review comment:
   NIT: Not changed by this patch, can we rename this to checkOpen() similar to 
other inputStreams?
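For context, the seek logic documented in the javadoc above boils down to two steps: locate the block stream that contains the target position, then seek that stream to the block-relative offset. A minimal, self-contained sketch follows; the names and the binary search are illustrative, not the actual KeyInputStream code.
{code}
import java.util.Arrays;

// Illustrative sketch of the two-step seek described above.
final class SeekSketch {
  // blockOffsets[i] = key-relative index of the first byte held by block i.
  static int findBlockIndex(long[] blockOffsets, int blockCount, long pos) {
    int index = Arrays.binarySearch(blockOffsets, 0, blockCount, pos);
    // When pos is not an exact block boundary, binarySearch returns
    // (-(insertionPoint) - 1); the containing block precedes the insertion
    // point.
    return index >= 0 ? index : -index - 2;
  }

  public static void main(String[] args) {
    long[] blockOffsets = {0, 200, 400};  // 200-byte blocks, as in the example
    long pos = 240;
    int blockIndex = findBlockIndex(blockOffsets, blockOffsets.length, pos);
    long posInBlock = pos - blockOffsets[blockIndex];
    // Prints "block 1, offset 40", matching the example in the javadoc.
    System.out.println("block " + blockIndex + ", offset " + posInBlock);
  }
}
{code}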
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247638)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and the checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum is for bytes 0 to 99, the next for bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-23 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846996#comment-16846996
 ] 

CR Hota commented on HDFS-13955:


[~elgoiri] [~ayushtkn] [~tasanuma] Thanks for the reviews.

That's a good point you brought up about the memory leak. I added serviceStop in 
HDFS-13955-HDFS-13891.004.patch. The test case failure is unrelated and is 
tracked in HDFS-14461.

 

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846981#comment-16846981
 ] 

Íñigo Goiri commented on HDFS-13909:


Thanks [~ayushtkn], this is pretty much it.
Is it really necessary to pass the full map as dstMap?
Is there a way to pass the {{CacheDirectiveInfo}} in some other way?
Otherwise, this is fine with me; maybe refactor the creation of the map into a 
function.

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch
>
>
> Currently, the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective, and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846977#comment-16846977
 ] 

Hadoop QA commented on HDFS-13787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969540/HDFS-13787-HDFS-13891.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 90cdca30ee0f 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26830/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26830/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26830/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| 

[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: (was: HDFS-13787-HDFS-13891.004.patch)

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.005.patch, 
> HDFS-13787-HDFS-13891.006.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: HDFS-13787-HDFS-13891.006.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.004.patch, 
> HDFS-13787-HDFS-13891.005.patch, HDFS-13787-HDFS-13891.006.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: (was: HDFS-13787-HDFS-13891.006.patch)

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.004.patch, 
> HDFS-13787-HDFS-13891.005.patch, HDFS-13787-HDFS-13891.006.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846968#comment-16846968
 ] 

Íñigo Goiri commented on HDFS-13787:


[~RANith], we've been having some collisions here.
I uploaded [^HDFS-13787-HDFS-13891.006.patch], merging your patch with mine.
This one should come out clean.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.004.patch, 
> HDFS-13787-HDFS-13891.005.patch, HDFS-13787-HDFS-13891.006.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: HDFS-13787-HDFS-13891.006.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.004.patch, 
> HDFS-13787-HDFS-13891.005.patch, HDFS-13787-HDFS-13891.006.patch, 
> HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846966#comment-16846966
 ] 

Ayush Saxena commented on HDFS-13909:
-

Really sorry, I didn't catch it earlier.
I have uploaded a patch extending the behaviour in RemoteParam; I guess this 
time I have got what you said.
Let me know what other changes are required!

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch
>
>
> Currently, the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective, and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13909) RBF: Add Cache pools and directives related ClientProtocol apis

2019-05-23 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13909:

Attachment: HDFS-13909-HDFS-13891-03.patch

> RBF: Add Cache pools and directives related ClientProtocol apis
> ---
>
> Key: HDFS-13909
> URL: https://issues.apache.org/jira/browse/HDFS-13909
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13909-HDFS-13891-01.patch, 
> HDFS-13909-HDFS-13891-02.patch, HDFS-13909-HDFS-13891-03.patch
>
>
> Currently, the addCachePool, modifyCachePool, removeCachePool, listCachePools, 
> addCacheDirective, modifyCacheDirective, removeCacheDirective, and 
> listCacheDirectives APIs are not implemented in the Router.
> This JIRA intends to implement the above-mentioned APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846964#comment-16846964
 ] 

Hadoop QA commented on HDFS-13955:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 39s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13955 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969535/HDFS-13955-HDFS-13891.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3c47d91c7fdb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26828/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26828/testReport/ |
| Max. process+thread count | 1360 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=247619=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247619
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 23/May/19 18:42
Start Date: 23/May/19 18:42
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#discussion_r287022436
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 ##
 @@ -146,7 +146,7 @@
 
   public static final String DFS_RATIS_SNAPSHOT_THRESHOLD_KEY =
   "dfs.ratis.snapshot.threshold";
-  public static final long DFS_RATIS_SNAPSHOT_THRESHOLD_DEFAULT = 1;
+  public static final long DFS_RATIS_SNAPSHOT_THRESHOLD_DEFAULT = 10;
 
 Review comment:
   Hi @jiwq, thanks for the review. While doing performance testing with 160 
client threads continuously writing small files to a 50-node cluster, we 
observed that the snapshot is taken a bit too often. Hence this change 
updates the default limit, since my change is in a closely related area.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247619)
Time Spent: 1h 10m  (was: 1h)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In case a follower lags far behind the leader, the leader tries to send the 
> snapshot to the follower. For ContainerStateMachine, the information in the 
> snapshot is not the entire state machine data. InstallSnapshot for 
> ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846938#comment-16846938
 ] 

Hadoop QA commented on HDFS-13787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 84 new + 6 unchanged - 0 fixed = 90 total (was 6) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 39 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-13787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969533/HDFS-13787-HDFS-13891.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 40a7a27e1c3b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 4a16a08 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26827/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| compile | 

[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2019-05-23 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Attachment: HDFS-13787-HDFS-13891.004.patch

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13787-HDFS-13891.003.patch, 
> HDFS-13787-HDFS-13891.004.patch, HDFS-13787-HDFS-13891.004.patch, 
> HDFS-13787-HDFS-13891.005.patch, HDFS-13787.001.patch, HDFS-13787.002.patch
>
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1535) Space tracking for Open Containers : Handle Node Startup

2019-05-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846930#comment-16846930
 ] 

Hudson commented on HDDS-1535:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16595 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16595/])
HDDS-1535. Space tracking for Open Containers : Handle Node Startup. (sdeka: 
rev 869a1ab41a7c817e3f5f9bb5c74a93b68e5d2af4)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java


> Space tracking for Open Containers : Handle Node Startup
> 
>
> Key: HDDS-1535
> URL: https://issues.apache.org/jira/browse/HDDS-1535
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is related to HDDS-1511.
> Space tracking for Open Containers (committed space in the volume) relies on 
> usedBytes in the Container state. usedBytes is not persisted for every update 
> (chunk write), so on a node restart the value is stale.
> The proposal is to iterate the block DB for each open container during 
> startup and compute the used space.
> The block DB scan will be accelerated by spawning executors for each 
> container.
> This process will be carried out as part of building the container set during 
> startup.
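A hedged sketch of the startup rebuild described above: one task per open container sums block sizes out of its block DB to recompute usedBytes. The Container interface below is an illustrative stand-in, not the Ozone container API.
{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

final class UsedBytesRebuildSketch {
  // Illustrative stand-in for a container plus its block DB.
  interface Container {
    boolean isOpen();
    Iterable<Long> blockLengthsFromDb();   // hypothetical block-DB scan
    void setUsedBytes(long usedBytes);
  }

  static void rebuildUsedBytes(List<Container> containers, int threads)
      throws InterruptedException {
    ExecutorService executor = Executors.newFixedThreadPool(threads);
    for (Container container : containers) {
      if (!container.isOpen()) {
        continue;                          // only open containers are stale
      }
      executor.submit(() -> {
        long used = 0;
        for (long len : container.blockLengthsFromDb()) {
          used += len;                     // sum block lengths from the DB
        }
        container.setUsedBytes(used);
      });
    }
    executor.shutdown();
    executor.awaitTermination(10, TimeUnit.MINUTES);
  }
}
{code}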



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-23 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846931#comment-16846931
 ] 

Hudson commented on HDDS-700:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16595 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16595/])
HDDS-700. Support rack awared node placement policy based on network (xyao: rev 
20a4ec351c51da3459423852abea1d6c0e3097e3)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementCapacity.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRandom.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMCommonPolicy.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/ContainerPlacementPolicy.java


> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-700.01.patch, HDDS-700.02.patch, HDDS-700.03.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS.
> By default, with 3 replicas, two replicas will be on the same rack, and the 
> third replica and all remaining replicas will be on different racks.
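For illustration, a simplified sketch of the default rule quoted above (two replicas on one rack, the rest on distinct racks). This is not the SCMContainerPlacementRackAware implementation; fallback handling for clusters with too few racks or nodes is omitted.
{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class RackAwareSketch {
  static final class Node {
    final String name;
    final String rack;
    Node(String name, String rack) { this.name = name; this.rack = rack; }
  }

  static List<Node> choose(List<Node> healthyNodes, int required) {
    List<Node> chosen = new ArrayList<>();
    Set<String> usedRacks = new HashSet<>();
    for (Node candidate : healthyNodes) {
      boolean pick;
      if (chosen.isEmpty()) {
        pick = true;                                       // 1st: any node
      } else if (chosen.size() == 1) {
        pick = candidate.rack.equals(chosen.get(0).rack)   // 2nd: same rack
            && candidate != chosen.get(0);
      } else {
        pick = !usedRacks.contains(candidate.rack);        // rest: new rack
      }
      if (pick) {
        chosen.add(candidate);
        usedRacks.add(candidate.rack);
        if (chosen.size() == required) {
          return chosen;
        }
      }
    }
    throw new IllegalStateException("not enough nodes or racks");
  }
}
{code}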



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1559?focusedWorklogId=247597=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247597
 ]

ASF GitHub Bot logged work on HDDS-1559:


Author: ASF GitHub Bot
Created on: 23/May/19 18:00
Start Date: 23/May/19 18:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #841: HDDS-1559. Include 
committedBytes to determine Out of Space in VolumeChoosingPolicy. Contributed 
by Supratim Deka
URL: https://github.com/apache/hadoop/pull/841#issuecomment-495322801
 
 
   +1 for the patch. @supratimdeka can you look at the merge conflicts?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247597)
Time Spent: 40m  (was: 0.5h)

> Include committedBytes to determine Out of Space in VolumeChoosingPolicy
> 
>
> Key: HDDS-1559
> URL: https://issues.apache.org/jira/browse/HDDS-1559
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a follow-up from HDDS-1511 and HDDS-1535.
> Currently, when creating a new Container, the DN invokes 
> RoundRobinVolumeChoosingPolicy:chooseVolume(). This routine checks for 
> (volume available space > container max size). If no eligible volume is 
> found, the policy throws a DiskOutOfSpaceException. This is the current 
> behaviour.
> However, the computation of available space does not take into consideration 
> the space that is going to be consumed by writes to existing containers which 
> are still Open and accepting chunk writes.
> This Jira proposes to enhance the space availability check in chooseVolume by 
> including the committed space (committedBytes in HddsVolume) in the equation.
> The handling/management of the exception in Ratis will not be modified in 
> this Jira. That will be scoped separately as part of the Datanode IO Failure 
> handling work.
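A minimal sketch of the enhanced eligibility check proposed above, with illustrative parameter names: the space already committed to open containers is subtracted from the volume's free space before comparing against the container max size.
{code}
final class VolumeSpaceCheckSketch {
  // Sketch only: a volume can host a new container only if its free space,
  // minus the bytes already committed to open containers, still fits it.
  static boolean hasRoomForNewContainer(long capacity, long used,
      long committedBytes, long containerMaxSize) {
    long available = capacity - used;
    return (available - committedBytes) >= containerMaxSize;
  }

  public static void main(String[] args) {
    // 10 GB free but 9 GB already committed leaves no room for a 5 GB container.
    System.out.println(hasRoomForNewContainer(
        100L << 30, 90L << 30, 9L << 30, 5L << 30)); // prints false
  }
}
{code}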



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1555) Disable install snapshot for ContainerStateMachine

2019-05-23 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1555?focusedWorklogId=247589=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-247589
 ]

ASF GitHub Bot logged work on HDDS-1555:


Author: ASF GitHub Bot
Created on: 23/May/19 17:50
Start Date: 23/May/19 17:50
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #846: HDDS-1555. 
Disable install snapshot for ContainerStateMachine.
URL: https://github.com/apache/hadoop/pull/846#discussion_r287061991
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -597,4 +603,12 @@ void handleNodeSlowness(RaftGroup group, RoleInfoProto 
roleInfoProto) {
   void handleNoLeader(RaftGroup group, RoleInfoProto roleInfoProto) {
 handlePipelineFailure(group.getGroupId(), roleInfoProto);
   }
+
+  void handleInstallSnapshotFromLeader(RaftGroup group,
+  RoleInfoProto roleInfoProto, TermIndex firstTermIndexInLog) {
+LOG.warn("Install snapshot notification received from Leader with " +
+"termIndex : " + firstTermIndexInLog +
+", terminating pipeline " + group.getGroupId());
+handlePipelineFailure(group.getGroupId(), roleInfoProto);
 
 Review comment:
   Yes sorry, let me add to the description.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 247589)
Time Spent: 1h  (was: 50m)

> Disable install snapshot for ContainerStateMachine
> --
>
> Key: HDDS-1555
> URL: https://issues.apache.org/jira/browse/HDDS-1555
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In case a follower lags far behind the leader, the leader tries to send the 
> snapshot to the follower. For ContainerStateMachine, the information in the 
> snapshot is not the entire state machine data. InstallSnapshot for 
> ContainerStateMachine should be disabled.
> {code}
> 2019-05-19 10:58:22,198 WARN  server.GrpcLogAppender 
> (GrpcLogAppender.java:installSnapshot(423)) - 
> GrpcLogAppender(e3e19760-1340-4acd-b50d-f8a796a97254->28d9bd2f-3fe2-4a69-8120-757a00fa2f20):
>  failed to install snapshot 
> [/Users/msingh/code/apache/ozone/github/git_oz_bugs_fixes/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-c2a863ef-8be9-445c-886f-57cad3a7b12e/datanode-6/data/ratis/fb88b749-3e75-4381-8973-6e0cb4904c7e/sm/snapshot.2_190]:
>  {}
> java.lang.NullPointerException
> at 
> org.apache.ratis.server.impl.LogAppender.readFileChunk(LogAppender.java:369)
> at 
> org.apache.ratis.server.impl.LogAppender.access$1100(LogAppender.java:54)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:318)
> at 
> org.apache.ratis.server.impl.LogAppender$SnapshotRequestIter$1.next(LogAppender.java:303)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.installSnapshot(GrpcLogAppender.java:412)
> at 
> org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:101)
> at 
> org.apache.ratis.server.impl.LogAppender$AppenderDaemon.run(LogAppender.java:80)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1535) Space tracking for Open Containers : Handle Node Startup

2019-05-23 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1535.
-
  Resolution: Fixed
   Fix Version/s: 0.5.0
Target Version/s:   (was: 0.5.0)

I've committed this. Thanks for the contribution [~sdeka].

> Space tracking for Open Containers : Handle Node Startup
> 
>
> Key: HDDS-1535
> URL: https://issues.apache.org/jira/browse/HDDS-1535
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This is related to HDDS-1511.
> Space tracking for Open Containers (committed space in the volume) relies on 
> usedBytes in the Container state. usedBytes is not persisted for every update 
> (chunk write), so on a node restart the value is stale.
> The proposal is to iterate the block DB for each open container during 
> startup and compute the used space.
> The block DB scan will be accelerated by spawning executors for each 
> container.
> This process will be carried out as part of building the container set during 
> startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


