[GitHub] [hadoop-ozone] avijayanhwx commented on issue #684: HDDS-3134. Debug Tool that gives chunk location information given a key.

2020-03-16 Thread GitBox
avijayanhwx commented on issue #684: HDDS-3134. Debug Tool that gives chunk 
location information given a key.
URL: https://github.com/apache/hadoop-ozone/pull/684#issuecomment-599892077
 
 
   @sadanand48 Please fix the failing CI issues in this patch.





[jira] [Updated] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: Via s3gateway, writing a 187 MB file takes 5 seconds, but reading 
it takes 17 seconds. Both the write and the read request split the 187 MB file 
into 24 parts: the write request writes all 24 parts in parallel, while the 
read request reads 10 parts in parallel and then reads the remaining 14 parts 
sequentially. So a read costs more than twice as much time as a write.  (was: 
By s3gateway, write a 187MB file cost 5 seconds, but read it cost 17 seconds. 
The write and read request both split the 187MB file into 24 parts, write 
request write all the 24 parts in parallel, the read request read 10 parts in 
parallel and then read the left 14 parts in sequential. So read cost 2 times 
more than write.)

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. Both the write and the read request split the 187 MB file into 24 
> parts: the write request writes all 24 parts in parallel, while the read 
> request reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.
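
For illustration, here is a minimal sketch of reading every part of the key 
concurrently rather than capping the read at 10 parallel parts. This is not 
s3gateway's actual code; readPart() and the thread-pool sizing are assumptions 
for the example.

{code}
// Minimal sketch, not s3gateway's actual code: read every part of the key
// concurrently instead of 10 in parallel plus 14 sequentially. readPart()
// is a hypothetical helper that fetches one part from the object store.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelPartReader {
  public static List<byte[]> readAllParts(int partCount) {
    ExecutorService pool = Executors.newFixedThreadPool(partCount);
    try {
      // Submit all parts at once; join() preserves part order for reassembly.
      List<CompletableFuture<byte[]>> futures = IntStream.range(0, partCount)
          .mapToObj(i -> CompletableFuture.supplyAsync(() -> readPart(i), pool))
          .collect(Collectors.toList());
      return futures.stream()
          .map(CompletableFuture::join)
          .collect(Collectors.toList());
    } finally {
      pool.shutdown();
    }
  }

  // Hypothetical: fetch a single part of the key.
  private static byte[] readPart(int index) {
    return new byte[0]; // placeholder
  }
}
{code}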






[jira] [Updated] (HDDS-3214) Unhealthy datanodes repeatedly participate in pipeline creation

2020-03-16 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3214:

Summary: Unhealthy datanodes repeatedly participate in pipeline creation  
(was: unhealthy datanodes repeatedly participate in pipeline creation)

> Unhealthy datanodes repeatedly participate in pipeline creation
> ---
>
> Key: HDDS-3214
> URL: https://issues.apache.org/jira/browse/HDDS-3214
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Priority: Major
>  Labels: fault_injection
>
> Steps taken:
> 1) Mounted a noise-injection FUSE on all datanodes.
> 2) Selected 1 datanode from each open pipeline (factor=3).
> 3) Injected WRITE FAILURE noise with error code ENOENT on the 
> "hdds.datanode.dir" path of the datanodes selected in step 2.
> 4) Started a PUT key operation of size 32 MB.
>  
> Observation:
>  # Commit failed, and the pipelines were moved to the exclusion list.
>  # The client retries, and a new pipeline is created with the same set of 
> datanodes. Container creation fails because the WRITE FAILURE injection is 
> still present.
>  # The pipeline is closed, and the process is repeated for 
> "ozone.client.max.retries" retries.
> Every time, the same set of datanodes, including 1 unhealthy datanode, is 
> selected for pipeline creation.
> Expectation: the pipeline should have been created by selecting 3 healthy 
> datanodes from those available (see the sketch below).
>  
> cc - [~ljain]
>  
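
For illustration, the expected behavior might look like the following sketch. 
The String node IDs and the unhealthy-node set are assumptions for the 
example, not SCM's actual PipelinePlacementPolicy API.

{code}
// Illustrative sketch only: the types and the unhealthy set are assumptions,
// not SCM's actual placement API.
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class HealthyNodePicker {
  static List<String> pickPipelineNodes(List<String> candidates,
      Set<String> unhealthy, int nodesRequired) {
    // Drop nodes already known to be unhealthy (e.g. ones that failed
    // container creation) before choosing pipeline members.
    List<String> healthy = candidates.stream()
        .filter(dn -> !unhealthy.contains(dn))
        .collect(Collectors.toList());
    if (healthy.size() < nodesRequired) {
      throw new IllegalStateException("Not enough healthy datanodes: "
          + healthy.size() + " < " + nodesRequired);
    }
    return healthy.subList(0, nodesRequired);
  }
}
{code}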






[jira] [Commented] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-03-16 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060612#comment-17060612
 ] 

runzhiwang commented on HDDS-3223:
--

I'm working on it.

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. Both the write and the read request split the 187 MB file into 24 
> parts: the write request writes all 24 parts in parallel, while the read 
> request reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.






[jira] [Assigned] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reassigned HDDS-3223:


Assignee: runzhiwang

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. Both the write and the read request split the 187 MB file into 24 
> parts: the write request writes all 24 parts in parallel, while the read 
> request reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.






[jira] [Updated] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: Via s3gateway, writing a 187 MB file takes 5 seconds, but reading 
it takes 17 seconds. Both the write and the read request split the 187 MB file 
into 24 parts: the write request writes all 24 parts in parallel, while the 
read request reads 10 parts in parallel and then reads the remaining 14 parts 
sequentially. So a read costs more than twice as much time as a write.  (was: 
By s3gateway, write a 187MB file cost 5 seconds, but read it cost 17 seconds. 
The write and read request split the 187MB file into 24 parts, write request 
write all the 24 parts in parallel, the read request read 10 parts in parallel 
and then read the left 14 parts in sequential. So read cost 2 times more than 
write.)

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. Both the write and the read request split the 187 MB file into 24 
> parts: the write request writes all 24 parts in parallel, while the read 
> request reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.






[jira] [Updated] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Summary: Read a big object cost 2 times more than write it by s3g  (was: 
Improve the read efficiency of big object in s3g)

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. The write and the read request split the 187 MB file into 24 parts: 
> the write request writes all 24 parts in parallel, while the read request 
> reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.






[jira] [Updated] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: Via s3gateway, writing a 187 MB file takes 5 seconds, but reading 
it takes 17 seconds. The write and the read request split the 187 MB file into 
24 parts: the write request writes all 24 parts in parallel, while the read 
request reads 10 parts in parallel and then reads the remaining 14 parts 
sequentially. So a read costs more than twice as much time as a write.  (was: 
By s3gateway, write a 200MB file cost 5 seconds, but read it cost 17 seconds. 
And )

> Improve the read efficiency of big object in s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> Via s3gateway, writing a 187 MB file takes 5 seconds, but reading it takes 17 
> seconds. The write and the read request split the 187 MB file into 24 parts: 
> the write request writes all 24 parts in parallel, while the read request 
> reads 10 parts in parallel and then reads the remaining 14 parts 
> sequentially. So a read costs more than twice as much time as a write.






[jira] [Updated] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: By s3gateway, write a 200MB file cost 5 seconds, but read it 
cost 17 seconds. And 

> Improve the read efficiency of big object in s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>
> By s3gateway, write a 200MB file cost 5 seconds, but read it cost 17 seconds. 
> And 






[jira] [Updated] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Issue Type: Improvement  (was: Bug)

> Improve the read efficiency of big object in s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Priority: Major
>







[jira] [Created] (HDDS-3223) Improve the read efficiency of big object in s3g

2020-03-16 Thread runzhiwang (Jira)
runzhiwang created HDDS-3223:


 Summary: Improve the read efficiency of big object in s3g
 Key: HDDS-3223
 URL: https://issues.apache.org/jira/browse/HDDS-3223
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: runzhiwang









[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline into one  (was: 
Merge a lot of RPC call getContainerWithPipeline)

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Description:  !screenshot-3.png!   (was: Read a 100MB object which has 25 
chunks, the jaeger trace information as the image shows. Now ozone read each 
chunk in sequential order, it can be improve by reading chunks in parallel. And 
the rpc call is also too many, it can be improve by one rpc call and return the 
result by batch.  !screenshot-3.png! )

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline  (was: Merge a 
lot of RPC call getContainerWithPipeline into one)

> Merge a lot of RPC call getContainerWithPipeline
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
>  !screenshot-3.png! 






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call getContainerWithPipeline into one

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call getContainerWithPipeline into one  (was: 
Merge a lot of RPC call )

> Merge a lot of RPC call getContainerWithPipeline into one
> -
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100 MB object that has 25 chunks produces the Jaeger trace shown in 
> the image. Ozone currently reads each chunk sequentially; this can be 
> improved by reading the chunks in parallel. There are also too many RPC 
> calls; they can be merged into a single RPC call that returns the results as 
> a batch (see the sketch below).  !screenshot-3.png! 
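
As a sketch of the proposed merge, a single batched call would replace the 
per-container round trips. The names below are illustrative assumptions, not 
the actual SCM protocol; ContainerResult stands in for the container-plus-
pipeline details to keep the sketch self-contained.

{code}
// Sketch only: hypothetical names, not the actual SCM client protocol.
import java.io.IOException;
import java.util.List;

interface ContainerLookup {
  // Current pattern: one RPC per container, so a 25-chunk key can cost
  // up to 25 round trips to SCM.
  ContainerResult getContainerWithPipeline(long containerID) throws IOException;

  // Proposed pattern: one RPC that returns the results for all containers
  // of a key as a batch.
  List<ContainerResult> getContainerWithPipelineBatch(List<Long> containerIDs)
      throws IOException;
}

class ContainerResult { /* container and pipeline details */ }
{code}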






[jira] [Updated] (HDDS-3168) Merge a lot of RPC call

2020-03-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3168:
-
Summary: Merge a lot of RPC call   (was: Improve the efficiency of reading 
object)

> Merge a lot of RPC call 
> 
>
> Key: HDDS-3168
> URL: https://issues.apache.org/jira/browse/HDDS-3168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> Reading a 100 MB object that has 25 chunks produces the Jaeger trace shown in 
> the image. Ozone currently reads each chunk sequentially; this can be 
> improved by reading the chunks in parallel. There are also too many RPC 
> calls; they can be merged into a single RPC call that returns the results as 
> a batch.  !screenshot-3.png! 









[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #678: HDDS-3179 Pipeline placement based on Topology does not have fallback

2020-03-16 Thread GitBox
timmylicheng commented on a change in pull request #678: HDDS-3179 Pipeline 
placement based on Topology does not have fallback
URL: https://github.com/apache/hadoop-ozone/pull/678#discussion_r393411067
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -99,9 +99,11 @@ boolean meetCriteria(DatanodeDetails datanodeDetails, int 
nodesRequired) {
   try {
 pipeline = stateManager.getPipeline(pid);
   } catch (PipelineNotFoundException e) {
-LOG.error("Pipeline not found in pipeline state manager during" +
-" pipeline creation. PipelineID: " + pid +
-" exception: " + e.getMessage());
+if (LOG.isDebugEnabled()) {
 
 Review comment:
   Thanks for the info. How about the log calls without "{}" placeholders? 
Should we check whether debug is enabled for those? @sodonnel 
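   
   For context, a generic SLF4J illustration of the distinction under 
discussion (not the patch itself):
   
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   class LoggingStyles {
     private static final Logger LOG =
         LoggerFactory.getLogger(LoggingStyles.class);
   
     void examples(Object pid, Exception e) {
       // String concatenation: the message is built even when DEBUG is off,
       // so a guard is needed to avoid the wasted work.
       if (LOG.isDebugEnabled()) {
         LOG.debug("Pipeline not found. PipelineID: " + pid);
       }
   
       // {} placeholders: formatting is deferred until SLF4J knows DEBUG is
       // enabled, so no guard is needed for cheap arguments. A trailing
       // Throwable also gets its full stack trace logged.
       LOG.debug("Pipeline not found. PipelineID: {}", pid, e);
   
       // A guard still pays off when computing an argument is itself expensive.
       if (LOG.isDebugEnabled()) {
         LOG.debug("State: {}", expensiveSummary());
       }
     }
   
     private String expensiveSummary() { return ""; }
   }
   ```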





[jira] [Commented] (HDDS-3171) Couldn't create RpcClient protocol exception on k8s.

2020-03-16 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060560#comment-17060560
 ] 

Xiaoyu Yao commented on HDDS-3171:
--

I think this is related to the K8s environment, where name resolution for the 
SCM host is unreliable when the OM RPC client makes the call. You might want 
to take a look at the K8s node logs, as mentioned here: 
https://github.com/kubernetes/kubernetes/issues/76790.




> Couldn't create RpcClient protocol exception on k8s.
> 
>
> Key: HDDS-3171
> URL: https://issues.apache.org/jira/browse/HDDS-3171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When starting Ozone on k8s, it sometimes reports a "Couldn't create 
> RpcClient protocol" exception, and sometimes it does not.
>  !screenshot-1.png! 






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #571: HDDS-3042. Support running full Ratis pipeline from IDE (IntelliJ)

2020-03-16 Thread GitBox
xiaoyuyao commented on a change in pull request #571: HDDS-3042. Support 
running full Ratis pipeline from IDE (IntelliJ)
URL: https://github.com/apache/hadoop-ozone/pull/571#discussion_r393391039
 
 

 ##
 File path: hadoop-ozone/dev-support/intellij/runConfigurations/Datanode2.xml
 ##
 @@ -0,0 +1,35 @@
+  (XML run-configuration content stripped by the mail archive)
 
 Review comment:
   Some setups may not have JRE 13. Should we keep JDK 8 as the minimum 
supported Java version, since it is common on Mac dev boxes? 





[jira] [Commented] (HDDS-3180) Datanode fails to start due to confused inconsistent volume state

2020-03-16 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060552#comment-17060552
 ] 

Yiqun Lin commented on HDDS-3180:
-

Thanks [~xyao] for the review and merge.

> Datanode fails to start due to confused inconsistent volume state
> -
>
> Key: HDDS-3180
> URL: https://issues.apache.org/jira/browse/HDDS-3180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I hit an error in my test Ozone cluster when restarting a datanode. The log 
> reports an inconsistent volume state but gives no other helpful detail:
> {noformat}
> 2020-03-14 02:31:46,204 [main] INFO  (LogAdapter.java:51) - registered 
> UNIX signal handlers for [TERM, HUP, INT]
> 2020-03-14 02:31:46,736 [main] INFO  (HddsDatanodeService.java:204) - 
> HddsDatanodeService host:lyq-xx.xx.xx.xx ip:xx.xx.xx.xx
> 2020-03-14 02:31:46,784 [main] INFO  (HddsVolume.java:177) - Creating 
> Volume: /tmp/hadoop-hdfs/dfs/data/hdds of storage type : DISK and capacity : 
> 20063645696
> 2020-03-14 02:31:46,786 [main] ERROR (MutableVolumeSet.java:202) - Failed 
> to parse the storage location: file:///tmp/hadoop-hdfs/dfs/data
> java.io.IOException: Volume is in an INCONSISTENT state. Skipped loading 
> volume: /tmp/hadoop-hdfs/dfs/data/hdds
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.initialize(HddsVolume.java:226)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:180)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:71)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume$Builder.build(HddsVolume.java:158)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.createVolume(MutableVolumeSet.java:336)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.initializeVolumeSet(MutableVolumeSet.java:183)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.<init>(MutableVolumeSet.java:139)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.<init>(MutableVolumeSet.java:111)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.<init>(OzoneContainer.java:97)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.<init>(DatanodeStateMachine.java:128)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:235)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:179)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:154)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:78)
> at picocli.CommandLine.execute(CommandLine.java:1173)
> at picocli.CommandLine.access$800(CommandLine.java:141)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
> at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:137)
> 2020-03-14 02:31:46,795 [shutdown-hook-0] INFO  (LogAdapter.java:51) - 
> SHUTDOWN_MSG:
> {noformat}
> Looking into the code, the root cause is that the VERSION file was lost on 
> that node.
> We should log a key message as well to help users quickly identify the root 
> cause (a sketch follows below).
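
A sketch of the kind of message that would have helped; the method and the 
VERSION-file path are assumptions about HddsVolume's internals, not the 
actual patch.

{code}
// Illustrative only: names are assumptions, not the actual HddsVolume change.
import java.io.File;
import java.io.IOException;

class VolumeCheck {
  static void checkVolumeState(File hddsRootDir, File versionFile)
      throws IOException {
    if (!versionFile.exists()) {
      // Name the missing artifact so the operator can act on the message.
      throw new IOException("Volume is in an INCONSISTENT state. Skipped"
          + " loading volume: " + hddsRootDir + ". Root cause: missing"
          + " VERSION file, expected at " + versionFile.getAbsolutePath());
    }
  }
}
{code}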






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #690: HDDS-3221. Refactor SafeModeHandler to use a Notification Interface

2020-03-16 Thread GitBox
bharatviswa504 commented on a change in pull request #690: HDDS-3221. Refactor 
SafeModeHandler to use a Notification Interface
URL: https://github.com/apache/hadoop-ozone/pull/690#discussion_r393379759
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/SafeModeTransition.java
 ##
 @@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.safemode;
+
+/**
+ * Interface which should be implemented by any object that wishes to be
+ * notified by the SafeModeManager when the safe mode state changes.
+ */
+public interface SafeModeTransition {
 
 Review comment:
   NIT: Can we rename this to SafeModeNotification? The classes that need to 
receive this notification will implement it, so the name would describe the 
notification rather than the transition. Thoughts?








[jira] [Created] (HDDS-3222) Add integrations test for Recon FSCK.

2020-03-16 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-3222:
---

 Summary: Add integrations test for Recon FSCK.
 Key: HDDS-3222
 URL: https://issues.apache.org/jira/browse/HDDS-3222
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.6.0
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.6.0


Recon tracks the containers that are missing in the cluster. We have to add an 
integration test that mimics this scenario to make sure there are no 
regressions in Recon's receipt of this information and its subsequent 
processing. 
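
A possible skeleton for the scenario; the cluster and Recon calls are left as 
comments because the concrete test-harness APIs are assumptions.

{code}
// Skeleton only: harness calls are assumptions and therefore elided.
import org.junit.Assert;
import org.junit.Test;

public class TestReconMissingContainers {
  @Test
  public void missingContainerIsReportedByRecon() throws Exception {
    // 1. Start a mini cluster with Recon and write a key so a container exists.
    // 2. Stop the datanodes holding the container's replicas.
    // 3. Wait for Recon to process the next round of container reports.
    // 4. Assert that Recon's missing-container view lists the container.
    Assert.assertTrue("replace with the real Recon assertion", true);
  }
}
{code}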






[jira] [Updated] (HDDS-3222) Add integration test for Recon FSCK.

2020-03-16 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3222:

Summary: Add integration test for Recon FSCK.  (was: Add integrations test 
for Recon FSCK.)

> Add integration test for Recon FSCK.
> 
>
> Key: HDDS-3222
> URL: https://issues.apache.org/jira/browse/HDDS-3222
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.6.0
>
>
> Recon tracks the containers that are missing in the cluster. We have to add 
> an integration test that mimics this scenario to make sure there are no 
> regressions in Recon's receipt of this information and its subsequent 
> processing. 






[jira] [Commented] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060534#comment-17060534
 ] 

YiSheng Lien commented on HDDS-3056:


Thank you [~smeng] for the comment.
Yeah, I will close HDDS-2385 after HDDS-3056 is fixed.

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> The current implementation of `listStatus` only returns the volumes the user 
> created.
> There is no existing public OM API that returns a list of users or all 
> volumes, which means we must add new APIs to OM that return either the user 
> list or all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes to which they don't have permission.)






[GitHub] [hadoop-ozone] vivekratnavel commented on issue #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
vivekratnavel commented on issue #681: HDDS-3153. Create REST API to serve 
Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#issuecomment-599803921
 
 
   Thanks @avijayanhwx for the review and commit.





[jira] [Updated] (HDDS-3221) Refactor SafeModeHandler to use a Notification Interface

2020-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3221:
-
Labels: pull-request-available  (was: )

> Refactor SafeModeHandler to use a Notification Interface
> 
>
> Key: HDDS-3221
> URL: https://issues.apache.org/jira/browse/HDDS-3221
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.6.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> The SafeModeHandler currently accepts several objects which it notifies when 
> the safe mode status changes.
> Each of these objects is notified using a different method (there is no 
> "notification interface"), and some of the logic which really belongs in 
> those objects (i.e. what to do when safe mode goes on or off) lives in the 
> safe mode classes rather than in the receiving class.
> As we may need to extend safe mode somewhat to delay pipeline creation until 
> sufficient nodes have registered, I think it is worthwhile to refactor this 
> area to do the following:
> 1. Introduce a new interface "SafeModeTransition" which must be implemented 
> by any object that wants to listen for safe mode starting or ending.
> {code}
> public interface SafeModeTransition {
>   void handleSafeModeTransition(SCMSafeModeManager.SafeModeStatus status);
> }
> {code}
> 2. Pass the SafeModeStatus object over this new interface. That way, we can 
> extend SafeModeStatus to include more states in the future than just safe 
> mode = true / false.
> 3. Change the constructor of SafeModeHandler to allow any number of objects 
> to be registered, making it more flexible going forward.
> 4. Ensure the logic of what action to take on a safe mode transition lives 
> within the notified objects rather than in the safe mode classes.






[GitHub] [hadoop-ozone] sodonnel opened a new pull request #690: HDDS-3221. Refactor SafeModeHandler to use a Notification Interface

2020-03-16 Thread GitBox
sodonnel opened a new pull request #690: HDDS-3221. Refactor SafeModeHandler to 
use a Notification Interface
URL: https://github.com/apache/hadoop-ozone/pull/690
 
 
   ## What changes were proposed in this pull request?
   
   The SafeModeHandler currently accepts several objects which it notifies when 
the safe mode status changes.
   
   Each of these objects is notified using a different method (there is no 
"notification interface"), and some of the logic which really belongs in those 
objects (i.e. what to do when safe mode goes on or off) lives in the safe mode 
classes rather than in the receiving class.
   
   As we may need to extend safe mode somewhat to delay pipeline creation until 
sufficient nodes have registered, I think it is worthwhile to refactor this 
area to do the following:
   
   1. Introduce a new interface "SafeModeTransition" which must be implemented 
by any object that wants to listen for safe mode starting or ending.
   ```
   public interface SafeModeTransition {
     void handleSafeModeTransition(SCMSafeModeManager.SafeModeStatus status);
   }
   ```
   2. Pass the SafeModeStatus object over this new interface. That way, we can 
extend SafeModeStatus to include more states in the future than just safe mode 
= true / false.
   
   3. Change the constructor of SafeModeHandler to allow any number of objects 
to be registered, making it more flexible going forward.
   
   4. Ensure the logic of what action to take on a safe mode transition lives 
within the notified objects rather than in the safe mode classes.
   
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3221
   
   ## How was this patch tested?
   
   Depends on existing unit tests
   





[jira] [Created] (HDDS-3221) Refactor SafeModeHandler to use a Notification Interface

2020-03-16 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-3221:
---

 Summary: Refactor SafeModeHandler to use a Notification Interface
 Key: HDDS-3221
 URL: https://issues.apache.org/jira/browse/HDDS-3221
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Affects Versions: 0.6.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


The SafeModeHandler currently accepts several objects which it notifies when 
the safe mode status changes.

Each of these objects is notified using a different method (there is no 
"notification interface"), and some of the logic which really belongs in those 
objects (i.e. what to do when safe mode goes on or off) lives in the safe mode 
classes rather than in the receiving class.

As we may need to extend safe mode somewhat to delay pipeline creation until 
sufficient nodes have registered, I think it is worthwhile to refactor this 
area to do the following:

1. Introduce a new interface "SafeModeTransition" which must be implemented by 
any object that wants to listen for safe mode starting or ending.

{code}
public interface SafeModeTransition {
  void handleSafeModeTransition(SCMSafeModeManager.SafeModeStatus status);
}
{code}

2. Pass the SafeModeStatus object over this new interface. That way, we can 
extend SafeModeStatus to include more states in the future than just safe mode 
= true / false.

3. Change the constructor of SafeModeHandler to allow any number of objects to 
be registered, making it more flexible going forward.

4. Ensure the logic of what action to take on a safe mode transition lives 
within the notified objects rather than in the safe mode classes (a sketch of 
the resulting shape follows below).
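
A minimal sketch of the resulting shape. SafeModeStatus is stubbed here so 
the example is self-contained; the real one lives in SCMSafeModeManager.

{code}
import java.util.ArrayList;
import java.util.List;

// Stub so the sketch compiles on its own.
class SafeModeStatus {
  boolean inSafeMode;
}

interface SafeModeTransition {
  void handleSafeModeTransition(SafeModeStatus status);
}

class SafeModeHandler {
  private final List<SafeModeTransition> listeners = new ArrayList<>();

  // Point 3: a varargs constructor lets any number of objects register.
  SafeModeHandler(SafeModeTransition... toNotify) {
    for (SafeModeTransition t : toNotify) {
      listeners.add(t);
    }
  }

  // Point 2: the status object itself is passed, so more states can be
  // added later without changing the interface.
  void onSafeModeStatusChange(SafeModeStatus status) {
    for (SafeModeTransition listener : listeners) {
      // Point 4: each listener decides what to do on the transition.
      listener.handleSafeModeTransition(status);
    }
  }
}
{code}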






[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #520: HDDS-2793. concept/Datanodes.md translation

2020-03-16 Thread GitBox
xiaoyuyao commented on issue #520: HDDS-2793. concept/Datanodes.md translation
URL: https://github.com/apache/hadoop-ozone/pull/520#issuecomment-599797949
 
 
   I checked the acceptance test logs, which point to the om-ha tests failing 
with a permission issue. This does not seem to be related to the documentation 
change here.  
   cc: @hanishakoneru and @bharatviswa504  
   
   
   2019-11-16T11:59:00.0117168Z ==============================================================
   2019-11-16T11:59:00.0120741Z ozone-om-ha-testOMHA :: Smoketest ozone cluster startup
   2019-11-16T11:59:00.0121273Z ==============================================================
   2019-11-16T11:59:12.8359463Z Stop Leader OM and Verify Failover | FAIL |
   2019-11-16T11:59:12.8360383Z OSError: [Errno 13] Permission denied: '/opt/hadoop/tmpsCDIs5'
   2019-11-16T11:59:12.8360708Z --------------------------------------------------------------
   2019-11-16T12:01:07.8768425Z Restart OM and Verify Ratis Logs | PASS |
   2019-11-16T12:01:07.8772261Z --------------------------------------------------------------
   2019-11-16T12:01:24.9714368Z Test Multiple Failovers | FAIL |
   2019-11-16T12:01:24.9715403Z OSError: [Errno 13] Permission denied: '/opt/hadoop/tmpLn9PKJ'
   2019-11-16T12:01:24.9715697Z --------------------------------------------------------------
   2019-11-16T12:01:24.9742716Z ozone-om-ha-testOMHA :: Smoketest ozone cluster startup | FAIL |
   2019-11-16T12:01:24.9749529Z 3 critical tests, 1 passed, 2 failed
   2019-11-16T12:01:24.9749662Z 3 tests total, 1 passed, 2 failed
   





[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline placement based on Topology does not have fallback

2020-03-16 Thread GitBox
sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline 
placement based on Topology does not have fallback
URL: https://github.com/apache/hadoop-ozone/pull/678#discussion_r393355949
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -414,6 +425,14 @@ protected DatanodeDetails chooseNodeFromNetworkTopology(
 Node pick = networkTopology.chooseRandom(
 anchor.getNetworkLocation(), excluded);
 DatanodeDetails pickedNode = (DatanodeDetails) pick;
+if (pickedNode == null) {
+  if (LOG.isDebugEnabled()) {
 
 Review comment:
   Again, I don't think we need the `if (LOG.isDebugEnabled()) {` wrapping the 
debug log.





[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline placement based on Topology does not have fallback

2020-03-16 Thread GitBox
sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline 
placement based on Topology does not have fallback
URL: https://github.com/apache/hadoop-ozone/pull/678#discussion_r393355398
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -292,10 +294,19 @@ DatanodeDetails fallBackPickNodes(
 int nodesToFind = nodesRequired - results.size();
 for (int x = 0; x < nodesToFind; x++) {
   // Pick remaining nodes based on the existence of rack awareness.
-  DatanodeDetails pick = rackAwareness
-  ? chooseNodeFromNetworkTopology(
-  nodeManager.getClusterNetworkTopologyMap(), anchor, exclude)
-  : fallBackPickNodes(healthyNodes, exclude);
+  DatanodeDetails pick;
+  try {
+pick = rackAwareness
+? chooseNodeFromNetworkTopology(
+nodeManager.getClusterNetworkTopologyMap(), anchor, exclude)
+: fallBackPickNodes(healthyNodes, exclude);
+  } catch (SCMException e) {
+pick = fallBackPickNodes(healthyNodes, exclude);
+if (LOG.isDebugEnabled()) {
 
 Review comment:
   I think we can remove this if statement safely.





[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline placement based on Topology does not have fallback

2020-03-16 Thread GitBox
sodonnel commented on a change in pull request #678: HDDS-3179 Pipeline 
placement based on Topology does not have fallback
URL: https://github.com/apache/hadoop-ozone/pull/678#discussion_r393355167
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -99,9 +99,11 @@ boolean meetCriteria(DatanodeDetails datanodeDetails, int 
nodesRequired) {
   try {
 pipeline = stateManager.getPipeline(pid);
   } catch (PipelineNotFoundException e) {
-LOG.error("Pipeline not found in pipeline state manager during" +
-" pipeline creation. PipelineID: " + pid +
-" exception: " + e.getMessage());
+if (LOG.isDebugEnabled()) {
 
 Review comment:
   I don't think we need to wrap the debug logs in `if (LOG.isDebugEnabled())` 
if we use the SLF4J-style logs with {} placeholders rather than string 
concatenation: the message is only formatted if the debug level is enabled.
   
   For this log, I would suggest using the {} placeholders and letting the full 
stack trace be logged rather than just `e.getMessage()`, e.g.:
   
   ```
   LOG.debug("Pipeline not found in pipeline state manager during" +
 " pipeline creation. PipelineID: {}", pid, e);
   ```





[jira] [Updated] (HDDS-3180) Datanode fails to start due to confused inconsistent volume state

2020-03-16 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-3180:
-
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the contribution. I've merged the PR to master. 

> Datanode fails to start due to confused inconsistent volume state
> -
>
> Key: HDDS-3180
> URL: https://issues.apache.org/jira/browse/HDDS-3180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I hit an error in my test Ozone cluster when restarting a datanode. The log 
> reports an inconsistent volume state but gives no other helpful detail:
> {noformat}
> 2020-03-14 02:31:46,204 [main] INFO  (LogAdapter.java:51) - registered 
> UNIX signal handlers for [TERM, HUP, INT]
> 2020-03-14 02:31:46,736 [main] INFO  (HddsDatanodeService.java:204) - 
> HddsDatanodeService host:lyq-xx.xx.xx.xx ip:xx.xx.xx.xx
> 2020-03-14 02:31:46,784 [main] INFO  (HddsVolume.java:177) - Creating 
> Volume: /tmp/hadoop-hdfs/dfs/data/hdds of storage type : DISK and capacity : 
> 20063645696
> 2020-03-14 02:31:46,786 [main] ERROR (MutableVolumeSet.java:202) - Failed 
> to parse the storage location: file:///tmp/hadoop-hdfs/dfs/data
> java.io.IOException: Volume is in an INCONSISTENT state. Skipped loading 
> volume: /tmp/hadoop-hdfs/dfs/data/hdds
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.initialize(HddsVolume.java:226)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:180)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:71)
> at 
> org.apache.hadoop.ozone.container.common.volume.HddsVolume$Builder.build(HddsVolume.java:158)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.createVolume(MutableVolumeSet.java:336)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.initializeVolumeSet(MutableVolumeSet.java:183)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.<init>(MutableVolumeSet.java:139)
> at 
> org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet.<init>(MutableVolumeSet.java:111)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.<init>(OzoneContainer.java:97)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.<init>(DatanodeStateMachine.java:128)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:235)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:179)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:154)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.call(HddsDatanodeService.java:78)
> at picocli.CommandLine.execute(CommandLine.java:1173)
> at picocli.CommandLine.access$800(CommandLine.java:141)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
> at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
> at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
> at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
> at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
> at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
> at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
> at 
> org.apache.hadoop.ozone.HddsDatanodeService.main(HddsDatanodeService.java:137)
> 2020-03-14 02:31:46,795 [shutdown-hook-0] INFO  (LogAdapter.java:51) - 
> SHUTDOWN_MSG:
> {noformat}
> Looking into the code, the root cause is that the VERSION file was lost on 
> that node.
> We should log a key message as well to help users quickly identify the root 
> cause.






[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #679: HDDS-3180. Datanode fails to start due to confused inconsistent volum…

2020-03-16 Thread GitBox
xiaoyuyao merged pull request #679: HDDS-3180. Datanode fails to start due to 
confused inconsistent volum…
URL: https://github.com/apache/hadoop-ozone/pull/679
 
 
   





[jira] [Updated] (HDDS-3177) Periodic dependency update (Java)

2020-03-16 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-3177:
-
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~adoroszlai] for the contribution. The PR has been merged. 

> Periodic dependency update (Java)
> -
>
> Key: HDDS-3177
> URL: https://issues.apache.org/jira/browse/HDDS-3177
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
> Attachments: dependency-check-report.html
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Must:
> jackson-databind 2.9.9 --> 2.10.3
> netty-all 4.0.52 --> 4.1.46
> nimbus-jose-jwt 4.41.1 --> 7.9 (or remove it?)
> Nice to have:
> cdi-api 1.2 --> 2.0.SP1 (major version change)
> hadoop 3.2.0 --> 3.2.1
> ===
> protobuf 2.5.0 --> ? this is more controversial 






[jira] [Resolved] (HDDS-3176) Remove unused dependency version strings

2020-03-16 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-3176.
--
Fix Version/s: 0.6.0
   Resolution: Fixed

This has been fixed as part of HDDS-3177. 

> Remove unused dependency version strings
> 
>
> Key: HDDS-3176
> URL: https://issues.apache.org/jira/browse/HDDS-3176
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: newbie
> Fix For: 0.6.0
>
>
> After the repo was split from hadoop, there are a few unused 
> dependencies/version strings left in pom.xml. They can be removed.
> Example: 
> {code}
> 1.2.6
> 2.0.0-beta-1
> {code}
> There may be more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393348483
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -871,6 +871,19 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
  startKeyName, keyPrefix, maxKeys);
   }
 
+  @Override
+  public boolean recoverTrash(String volumeName, String bucketName,
 
 Review comment:
   Updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393340771
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMTrashRecoverRequest.java
 ##
 @@ -0,0 +1,138 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.key;
+
+import java.io.IOException;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.response.key.OMTrashRecoverResponse;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.RecoverTrashRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles RecoverTrash request.
+ */
+public class OMTrashRecoverRequest extends OMKeyRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMTrashRecoverRequest.class);
+
+  public OMTrashRecoverRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) {
+RecoverTrashRequest recoverTrashRequest = getOmRequest()
+.getRecoverTrashRequest();
+Preconditions.checkNotNull(recoverTrashRequest);
+
+return getOmRequest().toBuilder().build();
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long transactionLogIndex,
+  OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper) {
+RecoverTrashRequest recoverTrashRequest = getOmRequest()
+.getRecoverTrashRequest();
+Preconditions.checkNotNull(recoverTrashRequest);
+
+String volumeName = recoverTrashRequest.getVolumeName();
+String bucketName = recoverTrashRequest.getBucketName();
+String keyName = recoverTrashRequest.getKeyName();
+String destinationBucket = recoverTrashRequest.getDestinationBucket();
+
+/** TODO: HDDS-2818. New Metrics for Trash Key Recover and Fails.
+ *  OMMetrics omMetrics = ozoneManager.getMetrics();
+ */
+
+OMResponse.Builder omResponse = OMResponse.newBuilder()
+.setCmdType(Type.RecoverTrash).setStatus(Status.OK)
+.setSuccess(true);
+
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean acquireLock = false;
+OMClientResponse omClientResponse = null;
+try {
+
+  // check Acl
+  checkKeyAcls(ozoneManager, volumeName, destinationBucket, keyName,
 
 Review comment:
   Yes, thanks for this comment.
   I think we should check the bucket here (for the case where the deleted key 
is not found).
   Fixed.
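   
   For illustration, the bucket check discussed here might look roughly like 
   this inside validateAndUpdateCache (a sketch built on existing Ozone types; 
   request wiring is elided):
   
{code:java}
// A sketch of the discussed bucket-existence check, not the merged code.
import java.io.IOException;

import org.apache.hadoop.ozone.om.OMMetadataManager;
import org.apache.hadoop.ozone.om.exceptions.OMException;

final class BucketCheckSketch {
  private BucketCheckSketch() { }

  /** Fail with BUCKET_NOT_FOUND before looking up the deleted key. */
  static void verifyDestinationBucket(OMMetadataManager omMetadataManager,
      String volumeName, String destinationBucket) throws IOException {
    String bucketKey =
        omMetadataManager.getBucketKey(volumeName, destinationBucket);
    if (omMetadataManager.getBucketTable().get(bucketKey) == null) {
      // Distinguish "destination bucket missing" from "deleted key not found".
      throw new OMException("Bucket not found: " + destinationBucket,
          OMException.ResultCodes.BUCKET_NOT_FOUND);
    }
  }
}
{code}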


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai merged pull request #680: HDDS-3177. Periodic dependency update (Java)

2020-03-16 Thread GitBox
adoroszlai merged pull request #680: HDDS-3177. Periodic dependency update 
(Java)
URL: https://github.com/apache/hadoop-ozone/pull/680
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on issue #680: HDDS-3177. Periodic dependency update (Java)

2020-03-16 Thread GitBox
adoroszlai commented on issue #680: HDDS-3177. Periodic dependency update (Java)
URL: https://github.com/apache/hadoop-ozone/pull/680#issuecomment-599782451
 
 
   Thanks @elek, @jojochuang and @xiaoyuyao for the review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #680: HDDS-3177. Periodic dependency update (Java)

2020-03-16 Thread GitBox
adoroszlai commented on a change in pull request #680: HDDS-3177. Periodic 
dependency update (Java)
URL: https://github.com/apache/hadoop-ozone/pull/680#discussion_r393338163
 
 

 ##
 File path: hadoop-hdds/tools/pom.xml
 ##
 @@ -54,6 +54,12 @@ xsi:schemaLocation="https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <artifactId>ratis-tools</artifactId>
   <groupId>org.apache.ratis</groupId>
   <version>${ratis.version}</version>
+  
 
 Review comment:
   We can remove it later, but I don't think it's strictly necessary.  Ozone 
builds fine using a locally built Ratis snapshot with RATIS-827 applied.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #680: HDDS-3177. Periodic dependency update (Java)

2020-03-16 Thread GitBox
xiaoyuyao commented on a change in pull request #680: HDDS-3177. Periodic 
dependency update (Java)
URL: https://github.com/apache/hadoop-ozone/pull/680#discussion_r393335812
 
 

 ##
 File path: pom.xml
 ##
 @@ -1791,11 +1708,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
   
   ${maven-surefire-plugin.argLine} @{argLine}
   
-${hadoop.common.build.dir}
-
-${hadoop.common.build.dir}
-
${env.LD_LIBRARY_PATH}:${project.build.directory}/native/target/usr/local/lib:${hadoop.common.build.dir}/native/target/usr/local/lib
 
 Review comment:
   The only native library I remember for Ozone comes via the ratis-thirdparty 
jar, so I'm OK with removing it. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #680: HDDS-3177. Periodic dependency update (Java)

2020-03-16 Thread GitBox
xiaoyuyao commented on a change in pull request #680: HDDS-3177. Periodic 
dependency update (Java)
URL: https://github.com/apache/hadoop-ozone/pull/680#discussion_r393335156
 
 

 ##
 File path: hadoop-hdds/tools/pom.xml
 ##
 @@ -54,6 +54,12 @@ xsi:schemaLocation="https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <artifactId>ratis-tools</artifactId>
   <groupId>org.apache.ratis</groupId>
   <version>${ratis.version}</version>
+  
 
 Review comment:
   Makes sense to me. I've +1'd RATIS-827. We can back out this change when 
updating the ratis-snapshot version. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r39620
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMTrashRecoverResponse.java
 ##
 @@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+/**
+ * Response for RecoverTrash request.
+ */
+public class OMTrashRecoverResponse extends OMClientResponse {
+  private OmKeyInfo omKeyInfo;
+
+  public OMTrashRecoverResponse(@Nullable OmKeyInfo omKeyInfo,
 
 Review comment:
   Yes, I will implement it in later patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393324717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
 ##
 @@ -169,6 +169,18 @@ OmKeyLocationInfo allocateBlock(OmKeyArgs args, long 
clientID,
   List<RepeatedOmKeyInfo> listTrash(String volumeName, String bucketName,
   String startKeyName, String keyPrefix, int maxKeys) throws IOException;
 
+  /**
 
 Review comment:
   Updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060483#comment-17060483
 ] 

Siyao Meng edited comment on HDDS-3056 at 3/16/20, 9:43 PM:


Thanks [~cxorm] for the prompt response.
I'd say HDDS-3056 would supersede HDDS-2385 for now. But we do plan to 
implement more sophisticated permission control later (e.g. a config 
switch to show/hide volumes with no permission - to be discussed).


was (Author: smeng):
Thanks [~cxorm] for the prompt response.
I'd say HDDS-3056 would supercede HDDS-2385, for now. But we do plan to 
implement more sophisticated permission control later (e.g. make a config 
switch to show/hide volumes with no permission)

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060483#comment-17060483
 ] 

Siyao Meng commented on HDDS-3056:
--

Thanks [~cxorm] for the prompt response.
I'd say HDDS-3056 would supersede HDDS-2385 for now. But we do plan to 
implement more sophisticated permission control later (e.g. a config 
switch to show/hide volumes with no permission).
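
As a rough illustration of that behavior (a hypothetical helper, not the OM 
implementation; the boolean stands for the proposed config switch):

{code:java}
// Hypothetical sketch of listing all volumes with a show/hide switch.
import java.util.List;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;

public final class VolumeListingSketch {
  private VolumeListingSketch() { }

  /**
   * @param showAll the proposed config switch: when true, every volume is
   *                listed (access is still enforced when the volume is used);
   *                when false, volumes the caller cannot read are hidden.
   * @param hasReadAcl (user, volume) -> whether the user may read the volume.
   */
  public static List<String> listVolumes(List<String> allVolumes, String user,
      boolean showAll, BiPredicate<String, String> hasReadAcl) {
    if (showAll) {
      return allVolumes;
    }
    return allVolumes.stream()
        .filter(volume -> hasReadAcl.test(user, volume))
        .collect(Collectors.toList());
  }
}
{code}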

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393324661
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -871,6 +871,19 @@ private boolean isKeyEmpty(OmKeyInfo keyInfo) {
  startKeyName, keyPrefix, maxKeys);
   }
 
+  @Override
+  public boolean recoverTrash(String volumeName, String bucketName,
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393324717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
 ##
 @@ -169,6 +169,18 @@ OmKeyLocationInfo allocateBlock(OmKeyArgs args, long 
clientID,
   List<RepeatedOmKeyInfo> listTrash(String volumeName, String bucketName,
   String startKeyName, String keyPrefix, int maxKeys) throws IOException;
 
+  /**
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-03-16 Thread GitBox
cxorm commented on a change in pull request #399: HDDS-2424. Add the 
recover-trash command server side handling.
URL: https://github.com/apache/hadoop-ozone/pull/399#discussion_r393324509
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2237,13 +2237,22 @@ public void deleteKey(OmKeyArgs args) throws 
IOException {
 }
   }
 
-  // TODO: HDDS-2424. recover-trash command server side handling.
   @Override
   public boolean recoverTrash(String volumeName, String bucketName,
   String keyName, String destinationBucket) throws IOException {
 
-boolean recoverOperation = true;
-return recoverOperation;
+if (isAclEnabled) {
 
 Review comment:
   Thanks for the review.
   Updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3170) Fix issues in File count by size task.

2020-03-16 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3170:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you for the reviews [~swagle], [~vivekratnavel], [~adoroszlai]. I have 
merged the PR. 

> Fix issues in File count by size task.
> --
>
> Key: HDDS-3170
> URL: https://issues.apache.org/jira/browse/HDDS-3170
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> * Handle DELETE key operation correctly.
> * Handle PUT key operation for an existing key.
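
A rough sketch of the bookkeeping those two fixes imply (a hypothetical class, 
not Recon's actual task code): a PUT over an existing key must first undo the 
old size's count, and a DELETE must decrement the bucket the key was counted in.

{code:java}
// Hypothetical file-count-by-size bookkeeping, not Recon's actual task code.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public final class FileCountBySizeSketch {
  // Bucket k counts keys whose size falls in [2^(k-1), 2^k); bucket 0 is
  // reserved for empty keys.
  private final ConcurrentMap<Integer, Long> countsByBucket =
      new ConcurrentHashMap<>();

  private static int bucketOf(long dataSize) {
    return dataSize == 0 ? 0 : 64 - Long.numberOfLeadingZeros(dataSize);
  }

  /** PUT: if the key already existed, undo the old size's count first. */
  public void onPut(Long oldSizeIfPresent, long newSize) {
    if (oldSizeIfPresent != null) {
      countsByBucket.merge(bucketOf(oldSizeIfPresent), -1L, Long::sum);
    }
    countsByBucket.merge(bucketOf(newSize), 1L, Long::sum);
  }

  /** DELETE: decrement the bucket the key was counted in, not a no-op. */
  public void onDelete(long oldSize) {
    countsByBucket.merge(bucketOf(oldSize), -1L, Long::sum);
  }
}
{code}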



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3086) Failure running integration test it-freon

2020-03-16 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060478#comment-17060478
 ] 

Attila Doroszlai commented on HDDS-3086:


Unfortunately it looks like the fix was not complete, as we [had a 
timeout|https://github.com/apache/hadoop-ozone/runs/511884583] in it-freon 
since the PR was merged.

> Failure running integration test it-freon 
> --
>
> Key: HDDS-3086
> URL: https://issues.apache.org/jira/browse/HDDS-3086
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: freon
>Reporter: Supratim Deka
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
> Attachments: debug_output.zip, 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp-output.txt, 
> org.apache.hadoop.ozone.freon.TestDataValidateWithDummyContainers-output.txt, 
> org.apache.hadoop.ozone.freon.TestRandomKeyGenerator-output.txt, 
> org.apache.hadoop.ozone.freon.TestRandomKeyGenerator.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Observed a time-out during pr-check/it-freon for HDDS-2940. Failure appears 
> unrelated to the changes in the patch. 
> [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.193 
> s - in org.apache.hadoop.ozone.freon.TestDataValidateWithUnsafeByteOperations
> [INFO] Running org.apache.hadoop.ozone.freon.TestFreonWithDatanodeRestart
> [WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 
> 30.559 s - in org.apache.hadoop.ozone.freon.TestFreonWithDatanodeRestart
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 16, Failures: 0, Errors: 0, Skipped: 3
> [INFO] 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  28:58 min
> [INFO] Finished at: 2020-02-26T17:55:42Z
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) 
> on project hadoop-ozone-integration-test: There was a timeout or other error 
> in the fork -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3220) Filesystem client should not retry on AccessControlException

2020-03-16 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HDDS-3220:
---

 Summary: Filesystem client should not retry on 
AccessControlException
 Key: HDDS-3220
 URL: https://issues.apache.org/jira/browse/HDDS-3220
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Arpit Agarwal


When running an ofs/o3fs operation without kerberos credentials, the client 
seems to go into a tight retry loop before eventually giving up.

In this case no retry should be necessary.

Also, the retry policy may need another look; there should be some wait 
between retries.
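
For illustration, one way such a policy could look, built from Hadoop's stock 
org.apache.hadoop.io.retry API (a sketch, not the Ozone client's actual policy):

{code:java}
// A sketch: fail immediately on AccessControlException, and wait
// between attempts for everything else instead of a tight loop.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.security.AccessControlException;

public final class FailFastRetrySketch {
  private FailFastRetrySketch() { }

  public static RetryPolicy build() {
    // Default path: up to 10 attempts with a 1-second pause between them.
    RetryPolicy withWait = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
        10, 1, TimeUnit.SECONDS);
    // AccessControlException will not go away on retry, so fail at once.
    Map<Class<? extends Exception>, RetryPolicy> exceptionMap = new HashMap<>();
    exceptionMap.put(AccessControlException.class,
        RetryPolicies.TRY_ONCE_THEN_FAIL);
    return RetryPolicies.exceptionDependentRetry(withWait, exceptionMap);
  }
}
{code}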



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx merged pull request #675: HDDS-3170. Fix issues in File count by size task.

2020-03-16 Thread GitBox
avijayanhwx merged pull request #675: HDDS-3170. Fix issues in File count by 
size task.
URL: https://github.com/apache/hadoop-ozone/pull/675
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #675: HDDS-3170. Fix issues in File count by size task.

2020-03-16 Thread GitBox
avijayanhwx commented on issue #675: HDDS-3170. Fix issues in File count by 
size task.
URL: https://github.com/apache/hadoop-ozone/pull/675#issuecomment-599767574
 
 
   Thank you for the reviews @swagle, @vivekratnavel & @adoroszlai. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx merged pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx merged pull request #681: HDDS-3153. Create REST API to serve Recon 
Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3153) Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-3153:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you for this patch [~vivekratnavel]. I have merged the PR.

> Create REST API to serve Recon Dashboard and integrate with UI in Recon.
> 
>
> Key: HDDS-3153
> URL: https://issues.apache.org/jira/browse/HDDS-3153
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2020-03-10 at 12.10.41 PM.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add a REST API to serve information required for the Recon dashboard
> !Screen Shot 2020-03-10 at 12.10.41 PM.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on issue #681: HDDS-3153. Create REST API to serve Recon 
Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#issuecomment-599763494
 
 
   Thank you for this patch @vivekratnavel. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on issue #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
vivekratnavel commented on issue #681: HDDS-3153. Create REST API to serve 
Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#issuecomment-599755180
 
 
   The failing test is not related to this patch; it fails due to a timeout issue.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060458#comment-17060458
 ] 

YiSheng Lien edited comment on HDDS-3056 at 3/16/20, 8:41 PM:
--

Hi [~smeng], thanks for the reminder about this jira; I have reassigned it.
And a question: if this issue is fixed, would we also fix 
[HDDS-2385|https://issues.apache.org/jira/browse/HDDS-2385]?


was (Author: cxorm):
Hi [~smeng], thanks for reminding this jira, I have reassigned it.

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060458#comment-17060458
 ] 

YiSheng Lien commented on HDDS-3056:


Hi [~smeng], thanks for the reminder about this jira; I have reassigned it.

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-3056:
--

Assignee: Siyao Meng  (was: YiSheng Lien)

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3056:
-
Description: 
Current implementation of `listStatus` only returns the volumes the user 
creates.
And there's no existing OM public API to return a list of users or return all 
volumes. Which means we must add new APIs to OM to either return user list or 
all volumes in order for this feature to work.

-We can open another jira on master branch to add those APIs, get back to this 
jira and add this function to OFS.-
After a discussion with [~arpaga], Sanjay suggested we should allow *all* users 
to list *all* volumes for now. (Users still won't be able to access volumes 
which they don't have permission to.)

  was:
Current implementation of `listStatus` only returns the volumes the user 
creates. And there's no existing OM public API to return a list of users or 
return all volumes. Which means we must add new APIs to OM to either return 
user list or all volumes in order for this feature to work.

We can open another jira on master to add those APIs, get back to this jira and 
add this function to OFS.


> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: YiSheng Lien
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates.
> And there's no existing OM public API to return a list of users or return all 
> volumes. Which means we must add new APIs to OM to either return user list or 
> all volumes in order for this feature to work.
> -We can open another jira on master branch to add those APIs, get back to 
> this jira and add this function to OFS.-
> After a discussion with [~arpaga], Sanjay suggested we should allow *all* 
> users to list *all* volumes for now. (Users still won't be able to access 
> volumes which they don't have permission to.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] hanishakoneru commented on issue #676: HDDS-1995. Generate renewTime on OMLeader for GetDelegationToken

2020-03-16 Thread GitBox
hanishakoneru commented on issue #676: HDDS-1995. Generate renewTime on 
OMLeader for GetDelegationToken
URL: https://github.com/apache/hadoop-ozone/pull/676#issuecomment-599736814
 
 
   > One more issue is that a generated token will use the leader's token 
   > renewal time. But if the 3 OMs have different renewal times, tokens 
   > generated whenever the leader changes will have different expiry times.
   
   Yes, but how do you propose to fix this? If the OMs have different renewal 
   times, we cannot pick one over the other.
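   
   For illustration, a hedged sketch of the idea in the PR title (the leader 
   stamps the renew time once and the value is replicated); the class and 
   method names here are hypothetical, not the actual OM classes:
   
{code:java}
// Hypothetical sketch: renew time computed once on the leader, then replicated.
final class RenewTimeOnLeaderSketch {

  /** Called on the OM leader only: stamp the expiry from its own config. */
  static long stampRenewTime(long nowMillis, long leaderRenewIntervalMillis) {
    return nowMillis + leaderRenewIntervalMillis;
  }

  /**
   * Followers store the replicated value instead of recomputing it from
   * their own (possibly different) renew-interval setting, so all OMs end
   * up with the same expiry for the same token.
   */
  static void applyReplicated(TokenStore store, byte[] tokenId,
      long replicatedRenewTime) {
    store.put(tokenId, replicatedRenewTime);
  }

  interface TokenStore {
    void put(byte[] tokenId, long renewTime);
  }
}
{code}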
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3056) Allow users to list all volumes with listStatus

2020-03-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-3056:

Summary: Allow users to list all volumes with listStatus  (was: Allow 
admins to list all volumes with listStatus)

> Allow users to list all volumes with listStatus
> ---
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: YiSheng Lien
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates. And there's no existing OM public API to return a list of users or 
> return all volumes. Which means we must add new APIs to OM to either return 
> user list or all volumes in order for this feature to work.
> We can open another jira on master to add those APIs, get back to this jira 
> and add this function to OFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3219) Allow users to list all volumes

2020-03-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-3219.
-
Resolution: Duplicate

> Allow users to list all volumes
> ---
>
> Key: HDDS-3219
> URL: https://issues.apache.org/jira/browse/HDDS-3219
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Arpit Agarwal
>Assignee: Siyao Meng
>Priority: Major
>
> Users should be able to see the complete list of volumes in the system.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3056) Allow admins to list all volumes with listStatus

2020-03-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-3056:

Summary: Allow admins to list all volumes with listStatus  (was: Implement 
ofs://: Allow admins to list all volumes with listStatus)

> Allow admins to list all volumes with listStatus
> 
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: YiSheng Lien
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates. And there's no existing OM public API to return a list of users or 
> return all volumes. Which means we must add new APIs to OM to either return 
> user list or all volumes in order for this feature to work.
> We can open another jira on master to add those APIs, get back to this jira 
> and add this function to OFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3056) Allow admins to list all volumes with listStatus

2020-03-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-3056:

Parent: (was: HDDS-2665)
Issue Type: Improvement  (was: Sub-task)

> Allow admins to list all volumes with listStatus
> 
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: YiSheng Lien
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates. And there's no existing OM public API to return a list of users or 
> return all volumes. Which means we must add new APIs to OM to either return 
> user list or all volumes in order for this feature to work.
> We can open another jira on master to add those APIs, get back to this jira 
> and add this function to OFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3056) Allow admins to list all volumes with listStatus

2020-03-16 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17060445#comment-17060445
 ] 

Siyao Meng commented on HDDS-3056:
--

Hi [~cxorm], thanks for taking up this jira. Are you working on it right now?
If not, I can take it; we need to fix this ASAP.

> Allow admins to list all volumes with listStatus
> 
>
> Key: HDDS-3056
> URL: https://issues.apache.org/jira/browse/HDDS-3056
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: YiSheng Lien
>Priority: Major
>
> Current implementation of `listStatus` only returns the volumes the user 
> creates. And there's no existing OM public API to return a list of users or 
> return all volumes. Which means we must add new APIs to OM to either return 
> user list or all volumes in order for this feature to work.
> We can open another jira on master to add those APIs, get back to this jira 
> and add this function to OFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3219) Allow users to list all volumes

2020-03-16 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HDDS-3219:
---

 Summary: Allow users to list all volumes
 Key: HDDS-3219
 URL: https://issues.apache.org/jira/browse/HDDS-3219
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Arpit Agarwal
Assignee: Siyao Meng


Users should be able to see the complete list of volumes in the system.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on issue #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
vivekratnavel commented on issue #688: HDDS-2995. Add integration test for 
Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#issuecomment-599726356
 
 
   +1 LGTM pending CI


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #688: HDDS-2995. Add 
integration test for Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#discussion_r393244582
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconAsPassiveScm.java
 ##
 @@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.recon;
+
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.ONE;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.RATIS;
+import static 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.runTestOzoneContainerViaDataNode;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Recon's passive SCM integration tests.
+ */
+public class TestReconAsPassiveScm {
+
+  private MiniOzoneCluster cluster = null;
+  private OzoneConfiguration conf;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
+conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
+cluster =  MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3)
+.includeRecon(true).build();
+cluster.waitForClusterToBeReady();
+  }
+
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+  @Test(timeout = 120000)
+  public void testDatanodeRegistrationAndReports() throws Exception {
+ReconStorageContainerManagerFacade reconScm =
+(ReconStorageContainerManagerFacade)
+cluster.getReconServer().getReconStorageContainerManager();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+PipelineManager reconPipelineManager = reconScm.getPipelineManager();
+PipelineManager scmPipelineManager = scm.getPipelineManager();
+
+LambdaTestUtils.await(60000, 5000,
 
 Review comment:
   Since the pipeline & container report interval is 5s, making the check 
interval less than 5 seconds may not help us. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #688: HDDS-2995. Add 
integration test for Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#discussion_r393244098
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconAsPassiveScm.java
 ##
 @@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.recon;
+
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.ONE;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.RATIS;
+import static 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.runTestOzoneContainerViaDataNode;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Recon's passive SCM integration tests.
+ */
+public class TestReconAsPassiveScm {
+
+  private MiniOzoneCluster cluster = null;
+  private OzoneConfiguration conf;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
+conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
+cluster =  MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3)
+.includeRecon(true).build();
+cluster.waitForClusterToBeReady();
+  }
+
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+@Test(timeout = 120000)
+  public void testDatanodeRegistrationAndReports() throws Exception {
+ReconStorageContainerManagerFacade reconScm =
+(ReconStorageContainerManagerFacade)
+cluster.getReconServer().getReconStorageContainerManager();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+PipelineManager reconPipelineManager = reconScm.getPipelineManager();
+PipelineManager scmPipelineManager = scm.getPipelineManager();
+
+LambdaTestUtils.await(60000, 5000,
+() -> (reconPipelineManager.getPipelines().size() == 4));
+
+// Verify if Recon has all the pipelines from SCM.
+scmPipelineManager.getPipelines().forEach(p -> {
+  try {
+assertNotNull(reconPipelineManager.getPipeline(p.getId()));
+  } catch (PipelineNotFoundException e) {
+Assert.fail();
+  }
+});
+
+// Verify we can never create a pipeline in Recon.
+LambdaTestUtils.intercept(UnsupportedOperationException.class,
+"Trying to create pipeline in Recon, which is prohibited!",
+() -> reconPipelineManager.createPipeline(RATIS, ONE));
+
+ContainerManager scmContainerManager = scm.getContainerManager();
+assertTrue(scmContainerManager.getContainerIDs().isEmpty());
+
+// Verify if Recon regix  stered all the nodes.
+NodeManager reconNodeManager = 

[jira] [Created] (HDDS-3218) Apply timeout defaults for integration tests

2020-03-16 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-3218:
--

 Summary: Apply timeout defaults for integration tests
 Key: HDDS-3218
 URL: https://issues.apache.org/jira/browse/HDDS-3218
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Several integration tests were fixed recently by setting Ratis-related 
timeouts.  The goal of this task is to

# get rid of the duplication introduced by repeating the timeouts in each test
# potentially fix some remaining flaky tests

by applying the timeouts as defaults in MiniOzoneCluster.
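
A minimal sketch of what the shared defaults could look like, assuming they
live in a helper on MiniOzoneCluster.Builder (the method name and the
concrete values are assumptions, not the actual patch):

{code:java}
private void applyTimeoutDefaults(OzoneConfiguration conf) {
  // Only fill in a default when the individual test has not set the key.
  if (conf.get(HDDS_CONTAINER_REPORT_INTERVAL) == null) {
    conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
  }
  if (conf.get(HDDS_PIPELINE_REPORT_INTERVAL) == null) {
    conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
  }
  // Ratis-related defaults would follow the same pattern, using the
  // timeout keys the recently fixed tests were setting individually.
}
{code}

Tests could then drop their local conf.set(...) calls unless they
deliberately need a non-default value.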



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #681: HDDS-3153. Create 
REST API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393242827
 
 

 ##
 File path: 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
 ##
 @@ -305,4 +419,39 @@ public void testGetPipelines() throws Exception {
   return (pipelineMetadata1.getContainers() == 1);
 });
   }
+
+  @Test
+  public void testGetClusterState() throws Exception {
+Response response = clusterStateEndpoint.getClusterState();
+ClusterStateResponse clusterStateResponse =
+(ClusterStateResponse) response.getEntity();
+
+Assert.assertEquals(1, clusterStateResponse.getPipelines());
+Assert.assertEquals(2, clusterStateResponse.getVolumes());
+Assert.assertEquals(2, clusterStateResponse.getBuckets());
+Assert.assertEquals(3, clusterStateResponse.getKeys());
+Assert.assertEquals(2, clusterStateResponse.getTotalDatanodes());
+Assert.assertEquals(2, clusterStateResponse.getHealthyDatanodes());
+
+DatanodeStorageReport storageReport =
+clusterStateResponse.getStorageReport();
+
+
+// if container report is processed first, and pipeline does not exist
+// then container is not added until the next container report is processed
+SCMHeartbeatRequestProto heartbeatRequestProto =
+SCMHeartbeatRequestProto.newBuilder()
+.setContainerReport(containerReportsProto)
+.setDatanodeDetails(datanodeDetailsProto)
+.build();
+reconScm.getDatanodeProtocolServer()
+.sendHeartbeat(heartbeatRequestProto);
+
+LambdaTestUtils.await(30000, 2000, () -> {
 
 Review comment:
   Refactored as per the suggestion in the latest commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #688: HDDS-2995. Add 
integration test for Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#discussion_r393237942
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconAsPassiveScm.java
 ##
 @@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.recon;
+
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.ONE;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.RATIS;
+import static 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.runTestOzoneContainerViaDataNode;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Recon's passive SCM integration tests.
+ */
+public class TestReconAsPassiveScm {
+
+  private MiniOzoneCluster cluster = null;
+  private OzoneConfiguration conf;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
+conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
+cluster =  MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3)
+.includeRecon(true).build();
+cluster.waitForClusterToBeReady();
+  }
+
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+@Test(timeout = 120000)
+  public void testDatanodeRegistrationAndReports() throws Exception {
+ReconStorageContainerManagerFacade reconScm =
+(ReconStorageContainerManagerFacade)
+cluster.getReconServer().getReconStorageContainerManager();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+PipelineManager reconPipelineManager = reconScm.getPipelineManager();
+PipelineManager scmPipelineManager = scm.getPipelineManager();
+
+LambdaTestUtils.await(60000, 5000,
 
 Review comment:
   Can we reduce the interval to 2s instead of 5s? This will help reduce 
overall runtime of the test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #688: HDDS-2995. Add 
integration test for Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#discussion_r393236373
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconAsPassiveScm.java
 ##
 @@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.recon;
+
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.ONE;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.RATIS;
+import static 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.runTestOzoneContainerViaDataNode;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Recon's passive SCM integration tests.
+ */
+public class TestReconAsPassiveScm {
+
+  private MiniOzoneCluster cluster = null;
+  private OzoneConfiguration conf;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
+conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
+cluster =  MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3)
+.includeRecon(true).build();
+cluster.waitForClusterToBeReady();
+  }
+
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+@Test(timeout = 120000)
+  public void testDatanodeRegistrationAndReports() throws Exception {
+ReconStorageContainerManagerFacade reconScm =
+(ReconStorageContainerManagerFacade)
+cluster.getReconServer().getReconStorageContainerManager();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+PipelineManager reconPipelineManager = reconScm.getPipelineManager();
+PipelineManager scmPipelineManager = scm.getPipelineManager();
+
+LambdaTestUtils.await(60000, 5000,
+() -> (reconPipelineManager.getPipelines().size() == 4));
+
+// Verify if Recon has all the pipelines from SCM.
+scmPipelineManager.getPipelines().forEach(p -> {
+  try {
+assertNotNull(reconPipelineManager.getPipeline(p.getId()));
+  } catch (PipelineNotFoundException e) {
+Assert.fail();
+  }
+});
+
+// Verify we can never create a pipeline in Recon.
+LambdaTestUtils.intercept(UnsupportedOperationException.class,
+"Trying to create pipeline in Recon, which is prohibited!",
+() -> reconPipelineManager.createPipeline(RATIS, ONE));
+
+ContainerManager scmContainerManager = scm.getContainerManager();
+assertTrue(scmContainerManager.getContainerIDs().isEmpty());
+
+// Verify if Recon regix  stered all the nodes.
 
 Review comment:
   Nit: Fix typo


[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #688: HDDS-2995. Add 
integration test for Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#discussion_r393241361
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconAsPassiveScm.java
 ##
 @@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.recon;
+
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.ONE;
+import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.RATIS;
+import static 
org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainer.runTestOzoneContainerViaDataNode;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Optional;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Recon's passive SCM integration tests.
+ */
+public class TestReconAsPassiveScm {
+
+  private MiniOzoneCluster cluster = null;
+  private OzoneConfiguration conf;
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+
+  @Before
+  public void init() throws Exception {
+conf = new OzoneConfiguration();
+conf.set(HDDS_CONTAINER_REPORT_INTERVAL, "5s");
+conf.set(HDDS_PIPELINE_REPORT_INTERVAL, "5s");
+cluster =  MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3)
+.includeRecon(true).build();
+cluster.waitForClusterToBeReady();
+  }
+
+  @After
+  public void shutdown() {
+if (cluster != null) {
+  cluster.shutdown();
+}
+  }
+
+@Test(timeout = 120000)
+  public void testDatanodeRegistrationAndReports() throws Exception {
+ReconStorageContainerManagerFacade reconScm =
+(ReconStorageContainerManagerFacade)
+cluster.getReconServer().getReconStorageContainerManager();
+StorageContainerManager scm = cluster.getStorageContainerManager();
+PipelineManager reconPipelineManager = reconScm.getPipelineManager();
+PipelineManager scmPipelineManager = scm.getPipelineManager();
+
+LambdaTestUtils.await(60000, 5000,
+() -> (reconPipelineManager.getPipelines().size() == 4));
+
+// Verify if Recon has all the pipelines from SCM.
+scmPipelineManager.getPipelines().forEach(p -> {
+  try {
+assertNotNull(reconPipelineManager.getPipeline(p.getId()));
+  } catch (PipelineNotFoundException e) {
+Assert.fail();
+  }
+});
+
+// Verify we can never create a pipeline in Recon.
+LambdaTestUtils.intercept(UnsupportedOperationException.class,
+"Trying to create pipeline in Recon, which is prohibited!",
+() -> reconPipelineManager.createPipeline(RATIS, ONE));
+
+ContainerManager scmContainerManager = scm.getContainerManager();
+assertTrue(scmContainerManager.getContainerIDs().isEmpty());
+
+// Verify if Recon regix  stered all the nodes.
+NodeManager reconNodeManager = 

[GitHub] [hadoop-ozone] avijayanhwx commented on issue #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on issue #681: HDDS-3153. Create REST API to serve Recon 
Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#issuecomment-599695913
 
 
   LGTM +1 pending CI


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #688: HDDS-2995. Add integration test for Recon's Passive SCM state.

2020-03-16 Thread GitBox
avijayanhwx commented on issue #688: HDDS-2995. Add integration test for 
Recon's Passive SCM state.
URL: https://github.com/apache/hadoop-ozone/pull/688#issuecomment-599694631
 
 
   Passed the 20x test run - 
https://github.com/avijayanhwx/hadoop-ozone/runs/511649415. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST 
API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393208805
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.ozone.recon.api.types.ClusterStateResponse;
+import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.apache.hadoop.ozone.recon.scm.ReconContainerManager;
+import org.apache.hadoop.ozone.recon.scm.ReconNodeManager;
+import org.apache.hadoop.ozone.recon.scm.ReconPipelineManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.inject.Inject;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import java.util.List;
+
+/**
+ * Endpoint to fetch current state of ozone cluster.
+ */
+@Path("/clusterState")
+@Produces(MediaType.APPLICATION_JSON)
+public class ClusterStateEndpoint {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ClusterStateEndpoint.class);
+
+  private ReconNodeManager nodeManager;
+  private ReconPipelineManager pipelineManager;
+  private ReconContainerManager containerManager;
+  private ReconOMMetadataManager omMetadataManager;
+
+  @Inject
+  ClusterStateEndpoint(OzoneStorageContainerManager reconSCM,
+   ReconOMMetadataManager omMetadataManager) {
+this.nodeManager =
+(ReconNodeManager) reconSCM.getScmNodeManager();
+this.pipelineManager = (ReconPipelineManager) 
reconSCM.getPipelineManager();
+this.containerManager =
+(ReconContainerManager) reconSCM.getContainerManager();
+this.omMetadataManager = omMetadataManager;
+  }
+
+  /**
+   * Return a summary report on current cluster state.
+   * @return {@link Response}
+   */
+  @GET
+  public Response getClusterState() {
+List<DatanodeDetails> datanodeDetails = nodeManager.getAllNodes();
+int containers = this.containerManager.getContainerIDs().size();
+int pipelines = this.pipelineManager.getPipelines().size();
+int healthyDatanodes = nodeManager.getNodeCount(NodeState.HEALTHY);
+SCMNodeStat stats = nodeManager.getStats();
+DatanodeStorageReport storageReport =
 
 Review comment:
   Sorry, I referenced the wrong storageReport. There is an unused one in 
TestEndpoints#testGetClusterState.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3217) Datanode startup is slow due to iterating container DB 2-3 times

2020-03-16 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3217:


 Summary: Datanode startup is slow due to iterating container DB 
2-3 times
 Key: HDDS-3217
 URL: https://issues.apache.org/jira/browse/HDDS-3217
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


During Datanode startup, for each container we iterate over the entire container DB twice:
1. To compute the bytes used (sum of the block lengths).
2. To count the pending-deletion keys.

And for open containers, we do step 1 again.

*Code Snippet:*
*ContainerReader.java:*

*For setting Bytes Used:*
{code:java}
  List<Map.Entry<byte[], byte[]>> liveKeys = metadata.getStore()
  .getRangeKVs(null, Integer.MAX_VALUE,
  MetadataKeyFilters.getNormalKeyFilter());

  bytesUsed = liveKeys.parallelStream().mapToLong(e-> {
BlockData blockData;
try {
  blockData = BlockUtils.getBlockData(e.getValue());
  return blockData.getSize();
} catch (IOException ex) {
  return 0L;
}
  }).sum();
  kvContainerData.setBytesUsed(bytesUsed);
{code}

*For setting pending deleted Key count*

{code:java}
  MetadataKeyFilters.KeyPrefixFilter filter =
  new MetadataKeyFilters.KeyPrefixFilter()
  .addFilter(OzoneConsts.DELETING_KEY_PREFIX);
  int numPendingDeletionBlocks =
  containerDB.getStore().getSequentialRangeKVs(null,
  Integer.MAX_VALUE, filter)
  .size();
  kvContainerData.incrPendingDeletionBlocks(numPendingDeletionBlocks);
{code}

*For open Containers*

{code:java}
  if (kvContainer.getContainerState()
  == ContainerProtos.ContainerDataProto.State.OPEN) {
// commitSpace for Open Containers relies on usedBytes
initializeUsedBytes(kvContainer);
  }
{code}


*Jstack of DN during startup*
{code:java}
"Thread-8" #34 prio=5 os_prio=0 tid=0x7f5df507 nid=0x8ee runnable 
[0x7f4d840f3000]
   java.lang.Thread.State: RUNNABLE
at org.rocksdb.RocksIterator.next0(Native Method)
at org.rocksdb.AbstractRocksIterator.next(AbstractRocksIterator.java:70)
at 
org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:195)
at 
org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:155)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:158)
at 
org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyAndFixupContainerData(ContainerReader.java:191)
at 
org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyContainerFile(ContainerReader.java:168)
at 
org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.readVolume(ContainerReader.java:146)
at 
org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.run(ContainerReader.java:101)
at java.lang.Thread.run(Thread.java:748)
{code}
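
A hedged sketch of one possible fix: fold both scans into a single pass over
the DB. The single-pass shape is an assumption, not a committed change; the
names follow the snippets above (java.nio.charset.StandardCharsets and
java.util.Map imports assumed):

{code:java}
long bytesUsed = 0;
int numPendingDeletionBlocks = 0;
// One unfiltered iteration instead of two filtered ones: classify each
// key as it streams by.
List<Map.Entry<byte[], byte[]>> allKVs = metadata.getStore()
    .getRangeKVs(null, Integer.MAX_VALUE);
for (Map.Entry<byte[], byte[]> kv : allKVs) {
  String key = new String(kv.getKey(), StandardCharsets.UTF_8);
  if (key.startsWith(OzoneConsts.DELETING_KEY_PREFIX)) {
    numPendingDeletionBlocks++;        // previously a second range scan
  } else if (!key.startsWith("#")) {   // skip other internal prefixes
    try {
      bytesUsed += BlockUtils.getBlockData(kv.getValue()).getSize();
    } catch (IOException ex) {
      // Unparseable entries contribute 0 bytes, matching the original.
    }
  }
}
kvContainerData.setBytesUsed(bytesUsed);
kvContainerData.incrPendingDeletionBlocks(numPendingDeletionBlocks);
{code}

Open containers could then reuse the bytesUsed computed here instead of
re-running initializeUsedBytes, removing the extra iteration as well.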




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3217) Datanode startup is slow due to iterating container DB 2-3 times

2020-03-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3217:
-
Labels: billiontest  (was: )

> Datanode startup is slow due to iterating container DB 2-3 times
> 
>
> Key: HDDS-3217
> URL: https://issues.apache.org/jira/browse/HDDS-3217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: billiontest
>
> During Datanode startup, for each container we iterate over the entire container DB twice:
> 1. To compute the bytes used (sum of the block lengths).
> 2. To count the pending-deletion keys.
> And for open containers, we do step 1 again.
> *Code Snippet:*
> *ContainerReader.java:*
> *For setting Bytes Used:*
> {code:java}
>   List<Map.Entry<byte[], byte[]>> liveKeys = metadata.getStore()
>   .getRangeKVs(null, Integer.MAX_VALUE,
>   MetadataKeyFilters.getNormalKeyFilter());
>   bytesUsed = liveKeys.parallelStream().mapToLong(e-> {
> BlockData blockData;
> try {
>   blockData = BlockUtils.getBlockData(e.getValue());
>   return blockData.getSize();
> } catch (IOException ex) {
>   return 0L;
> }
>   }).sum();
>   kvContainerData.setBytesUsed(bytesUsed);
> {code}
> *For setting pending deleted Key count*
> {code:java}
>   MetadataKeyFilters.KeyPrefixFilter filter =
>   new MetadataKeyFilters.KeyPrefixFilter()
>   .addFilter(OzoneConsts.DELETING_KEY_PREFIX);
>   int numPendingDeletionBlocks =
>   containerDB.getStore().getSequentialRangeKVs(null,
>   Integer.MAX_VALUE, filter)
>   .size();
>   kvContainerData.incrPendingDeletionBlocks(numPendingDeletionBlocks);
> {code}
> *For open Containers*
> {code:java}
>   if (kvContainer.getContainerState()
>   == ContainerProtos.ContainerDataProto.State.OPEN) {
> // commitSpace for Open Containers relies on usedBytes
> initializeUsedBytes(kvContainer);
>   }
> {code}
> *Jstack of DN during startup*
> {code:java}
> "Thread-8" #34 prio=5 os_prio=0 tid=0x7f5df507 nid=0x8ee runnable 
> [0x7f4d840f3000]
>java.lang.Thread.State: RUNNABLE
> at org.rocksdb.RocksIterator.next0(Native Method)
> at 
> org.rocksdb.AbstractRocksIterator.next(AbstractRocksIterator.java:70)
> at 
> org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:195)
> at 
> org.apache.hadoop.hdds.utils.RocksDBStore.getRangeKVs(RocksDBStore.java:155)
> at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.parseKVContainerData(KeyValueContainerUtil.java:158)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyAndFixupContainerData(ContainerReader.java:191)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.verifyContainerFile(ContainerReader.java:168)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.readVolume(ContainerReader.java:146)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerReader.run(ContainerReader.java:101)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST 
API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393208221
 
 

 ##
 File path: 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
 ##
 @@ -305,4 +419,39 @@ public void testGetPipelines() throws Exception {
   return (pipelineMetadata1.getContainers() == 1);
 });
   }
+
+  @Test
+  public void testGetClusterState() throws Exception {
+Response response = clusterStateEndpoint.getClusterState();
+ClusterStateResponse clusterStateResponse =
+(ClusterStateResponse) response.getEntity();
+
+Assert.assertEquals(1, clusterStateResponse.getPipelines());
+Assert.assertEquals(2, clusterStateResponse.getVolumes());
+Assert.assertEquals(2, clusterStateResponse.getBuckets());
+Assert.assertEquals(3, clusterStateResponse.getKeys());
+Assert.assertEquals(2, clusterStateResponse.getTotalDatanodes());
+Assert.assertEquals(2, clusterStateResponse.getHealthyDatanodes());
+
+DatanodeStorageReport storageReport =
+clusterStateResponse.getStorageReport();
+
+
+// if container report is processed first, and pipeline does not exist
+// then container is not added until the next container report is processed
+SCMHeartbeatRequestProto heartbeatRequestProto =
+SCMHeartbeatRequestProto.newBuilder()
+.setContainerReport(containerReportsProto)
+.setDatanodeDetails(datanodeDetailsProto)
+.build();
+reconScm.getDatanodeProtocolServer()
+.sendHeartbeat(heartbeatRequestProto);
+
+LambdaTestUtils.await(30000, 2000, () -> {
 
 Review comment:
   Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #681: HDDS-3153. Create 
REST API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393206515
 
 

 ##
 File path: 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
 ##
 @@ -305,4 +419,39 @@ public void testGetPipelines() throws Exception {
   return (pipelineMetadata1.getContainers() == 1);
 });
   }
+
+  @Test
+  public void testGetClusterState() throws Exception {
+Response response = clusterStateEndpoint.getClusterState();
+ClusterStateResponse clusterStateResponse =
+(ClusterStateResponse) response.getEntity();
+
+Assert.assertEquals(1, clusterStateResponse.getPipelines());
+Assert.assertEquals(2, clusterStateResponse.getVolumes());
+Assert.assertEquals(2, clusterStateResponse.getBuckets());
+Assert.assertEquals(3, clusterStateResponse.getKeys());
+Assert.assertEquals(2, clusterStateResponse.getTotalDatanodes());
+Assert.assertEquals(2, clusterStateResponse.getHealthyDatanodes());
+
+DatanodeStorageReport storageReport =
+clusterStateResponse.getStorageReport();
+
+
+// if container report is processed first, and pipeline does not exist
+// then container is not added until the next container report is processed
+SCMHeartbeatRequestProto heartbeatRequestProto =
+SCMHeartbeatRequestProto.newBuilder()
+.setContainerReport(containerReportsProto)
+.setDatanodeDetails(datanodeDetailsProto)
+.build();
+reconScm.getDatanodeProtocolServer()
+.sendHeartbeat(heartbeatRequestProto);
+
+LambdaTestUtils.await(30000, 2000, () -> {
 
 Review comment:
   Agreed. I can take care of this refactor in another JIRA.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
vivekratnavel commented on a change in pull request #681: HDDS-3153. Create 
REST API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393205858
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.ozone.recon.api.types.ClusterStateResponse;
+import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.apache.hadoop.ozone.recon.scm.ReconContainerManager;
+import org.apache.hadoop.ozone.recon.scm.ReconNodeManager;
+import org.apache.hadoop.ozone.recon.scm.ReconPipelineManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.inject.Inject;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import java.util.List;
+
+/**
+ * Endpoint to fetch current state of ozone cluster.
+ */
+@Path("/clusterState")
+@Produces(MediaType.APPLICATION_JSON)
+public class ClusterStateEndpoint {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ClusterStateEndpoint.class);
+
+  private ReconNodeManager nodeManager;
+  private ReconPipelineManager pipelineManager;
+  private ReconContainerManager containerManager;
+  private ReconOMMetadataManager omMetadataManager;
+
+  @Inject
+  ClusterStateEndpoint(OzoneStorageContainerManager reconSCM,
+   ReconOMMetadataManager omMetadataManager) {
+this.nodeManager =
+(ReconNodeManager) reconSCM.getScmNodeManager();
+this.pipelineManager = (ReconPipelineManager) 
reconSCM.getPipelineManager();
+this.containerManager =
+(ReconContainerManager) reconSCM.getContainerManager();
+this.omMetadataManager = omMetadataManager;
+  }
+
+  /**
+   * Return a summary report on current cluster state.
+   * @return {@link Response}
+   */
+  @GET
+  public Response getClusterState() {
+List<DatanodeDetails> datanodeDetails = nodeManager.getAllNodes();
+int containers = this.containerManager.getContainerIDs().size();
+int pipelines = this.pipelineManager.getPipelines().size();
+int healthyDatanodes = nodeManager.getNodeCount(NodeState.HEALTHY);
+SCMNodeStat stats = nodeManager.getStats();
+DatanodeStorageReport storageReport =
 
 Review comment:
   `storageReport` is used in setStorageReport of ClusterStateResponse.Builder 
in L102.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST 
API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393187829
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
 ##
 @@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
+import org.apache.hadoop.ozone.recon.api.types.ClusterStateResponse;
+import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.apache.hadoop.ozone.recon.scm.ReconContainerManager;
+import org.apache.hadoop.ozone.recon.scm.ReconNodeManager;
+import org.apache.hadoop.ozone.recon.scm.ReconPipelineManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.inject.Inject;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import java.util.List;
+
+/**
+ * Endpoint to fetch current state of ozone cluster.
+ */
+@Path("/clusterState")
+@Produces(MediaType.APPLICATION_JSON)
+public class ClusterStateEndpoint {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ClusterStateEndpoint.class);
+
+  private ReconNodeManager nodeManager;
+  private ReconPipelineManager pipelineManager;
+  private ReconContainerManager containerManager;
+  private ReconOMMetadataManager omMetadataManager;
+
+  @Inject
+  ClusterStateEndpoint(OzoneStorageContainerManager reconSCM,
+   ReconOMMetadataManager omMetadataManager) {
+this.nodeManager =
+(ReconNodeManager) reconSCM.getScmNodeManager();
+this.pipelineManager = (ReconPipelineManager) 
reconSCM.getPipelineManager();
+this.containerManager =
+(ReconContainerManager) reconSCM.getContainerManager();
+this.omMetadataManager = omMetadataManager;
+  }
+
+  /**
+   * Return a summary report on current cluster state.
+   * @return {@link Response}
+   */
+  @GET
+  public Response getClusterState() {
+List<DatanodeDetails> datanodeDetails = nodeManager.getAllNodes();
+int containers = this.containerManager.getContainerIDs().size();
+int pipelines = this.pipelineManager.getPipelines().size();
+int healthyDatanodes = nodeManager.getNodeCount(NodeState.HEALTHY);
+SCMNodeStat stats = nodeManager.getStats();
+DatanodeStorageReport storageReport =
 
 Review comment:
   Unused variable.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST API to serve Recon Dashboard and integrate with UI in Recon.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #681: HDDS-3153. Create REST 
API to serve Recon Dashboard and integrate with UI in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/681#discussion_r393199909
 
 

 ##
 File path: 
hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
 ##
 @@ -305,4 +419,39 @@ public void testGetPipelines() throws Exception {
   return (pipelineMetadata1.getContainers() == 1);
 });
   }
+
+  @Test
+  public void testGetClusterState() throws Exception {
+Response response = clusterStateEndpoint.getClusterState();
+ClusterStateResponse clusterStateResponse =
+(ClusterStateResponse) response.getEntity();
+
+Assert.assertEquals(1, clusterStateResponse.getPipelines());
+Assert.assertEquals(2, clusterStateResponse.getVolumes());
+Assert.assertEquals(2, clusterStateResponse.getBuckets());
+Assert.assertEquals(3, clusterStateResponse.getKeys());
+Assert.assertEquals(2, clusterStateResponse.getTotalDatanodes());
+Assert.assertEquals(2, clusterStateResponse.getHealthyDatanodes());
+
+DatanodeStorageReport storageReport =
+clusterStateResponse.getStorageReport();
+
+
+// if container report is processed first, and pipeline does not exist
+// then container is not added until the next container report is processed
+SCMHeartbeatRequestProto heartbeatRequestProto =
+SCMHeartbeatRequestProto.newBuilder()
+.setContainerReport(containerReportsProto)
+.setDatanodeDetails(datanodeDetailsProto)
+.build();
+reconScm.getDatanodeProtocolServer()
+.sendHeartbeat(heartbeatRequestProto);
+
+LambdaTestUtils.await(30000, 2000, () -> {
 
 Review comment:
   Line 440 - 454 has some repetitive elements in all 3 tests. Only the 
condition is different. Maybe we can use a helper function to do this and 
remove duplicate code.
   
   ```java
   private void waitAndCheckConditionAfterHeartbeat(Callable<Boolean> check)
       throws Exception {
     // if container report is processed first, and pipeline does not exist
     // then container is not added until the next container report is processed
     SCMHeartbeatRequestProto heartbeatRequestProto =
         SCMHeartbeatRequestProto.newBuilder()
             .setContainerReport(containerReportsProto)
             .setDatanodeDetails(datanodeDetailsProto)
             .build();
     reconScm.getDatanodeProtocolServer()
         .sendHeartbeat(heartbeatRequestProto);
     LambdaTestUtils.await(30000, 2000, check);
   }
   ```
   
   It can be invoked by:
   
   ```java
   waitAndCheckConditionAfterHeartbeat(() -> {
     Response response1 = clusterStateEndpoint.getClusterState();
     ClusterStateResponse clusterStateResponse1 =
         (ClusterStateResponse) response1.getEntity();
     return (clusterStateResponse1.getContainers() == 1);
   });
   ```
   
   I am OK to do this in the next JIRA if needed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #689: HDDS-3183. Deprecate old Recon HTTP Server Keytab config key.

2020-03-16 Thread GitBox
avijayanhwx commented on a change in pull request #689: HDDS-3183. Deprecate 
old Recon HTTP Server Keytab config key.
URL: https://github.com/apache/hadoop-ozone/pull/689#discussion_r393192877
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServer.java
 ##
 @@ -66,6 +67,9 @@ public static void main(String[] args) {
   @Override
   public Void call() throws Exception {
 OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
+Configuration.addDeprecation(
+ReconServerConfigKeys.OZONE_RECON_HTTP_KEYTAB_FILE_OLD,
 
 Review comment:
   Maybe we can add a method 'addConfigurationOverrides' in the 
ConfigurationProvider and move this logic there? When we have more 
deprecated keys in the future, it may be easier to maintain them there. 
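
   A rough sketch of that suggestion (the helper name follows the comment; the
new-key constant OZONE_RECON_HTTP_KEYTAB_FILE is an assumed name, not
necessarily what the patch introduces):

   ```java
   // Keep every deprecated-key mapping for Recon in one place.
   public final class ConfigurationProvider {

     static void addConfigurationOverrides() {
       Configuration.addDeprecation(
           ReconServerConfigKeys.OZONE_RECON_HTTP_KEYTAB_FILE_OLD,
           ReconServerConfigKeys.OZONE_RECON_HTTP_KEYTAB_FILE);
     }
   }
   ```

   ReconServer.call() would then invoke
ConfigurationProvider.addConfigurationOverrides() once, before building the
OzoneConfiguration, instead of calling Configuration.addDeprecation inline.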


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3216) Revisit all the flags OzoneContract.xml tests to make sure all the contract options are covered

2020-03-16 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-3216:
---

 Summary: Revisit all the flags OzoneContract.xml tests to make 
sure all the contract options are covered
 Key: HDDS-3216
 URL: https://issues.apache.org/jira/browse/HDDS-3216
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Mukul Kumar Singh


Revisit all the flags in the OzoneContract tests XML at 
https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/integration-test/src/test/resources/contract/ozone.xml.

We need to ensure that all the options defined in ContractOptions are covered:

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
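
For reference, each contract option is declared as a property in that XML
file; an entry the audit would check looks like the following
(fs.contract.supports-append is one of the real ContractOptions keys, and
the value shown is only illustrative):

{code:xml}
<property>
  <name>fs.contract.supports-append</name>
  <value>false</value>
</property>
{code}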




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2995) Add integration test for Recon's Passive SCM state.

2020-03-16 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2995:

Status: Patch Available  (was: Open)

> Add integration test for Recon's Passive SCM state.
> ---
>
> Key: HDDS-2995
> URL: https://issues.apache.org/jira/browse/HDDS-2995
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-2995-001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> * Verify Recon gets pipeline, node and container report from Datanode.
> * Verify SCM metadata state == Recon metadata state (Create pipeline , Close 
> pipeline, create container)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3183) Deprecate old Recon HTTP Server Keytab config key

2020-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3183:
-
Labels: pull-request-available  (was: )

> Deprecate old Recon HTTP Server Keytab config key
> -
>
> Key: HDDS-3183
> URL: https://issues.apache.org/jira/browse/HDDS-3183
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: pull-request-available
>
> The current config key for Recon HTTP Server Keytab file is 
> `ozone.recon.keytab.file`. It needs to be renamed to 
> `ozone.recon.http.kerberos.keytab.file` for consistency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on issue #689: HDDS-3183. Deprecate old Recon HTTP Server Keytab config key.

2020-03-16 Thread GitBox
vivekratnavel commented on issue #689: HDDS-3183. Deprecate old Recon HTTP 
Server Keytab config key.
URL: https://github.com/apache/hadoop-ozone/pull/689#issuecomment-599637201
 
 
   @avijayanhwx Please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel opened a new pull request #689: HDDS-3183. Deprecate old Recon HTTP Server Keytab config key.

2020-03-16 Thread GitBox
vivekratnavel opened a new pull request #689: HDDS-3183. Deprecate old Recon 
HTTP Server Keytab config key.
URL: https://github.com/apache/hadoop-ozone/pull/689
 
 
   ## What changes were proposed in this pull request?
   
   - Deprecate old config: `ozone.recon.keytab.file` 
   - Introduce new config: `ozone.recon.http.kerberos.keytab.file`
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3183
   
   ## How was this patch tested?
   
   This patch was tested in a real cluster with both old config and new config. 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on issue #673: HDDS-3064. Get Key is hung when READ delay is injected in chunk file path.

2020-03-16 Thread GitBox
elek commented on issue #673: HDDS-3064. Get Key is hung when READ delay is 
injected in chunk file path.
URL: https://github.com/apache/hadoop-ozone/pull/673#issuecomment-599635767
 
 
   > It was reproduced in fault injection testing environment. @nilotpalnandi , 
can you please add some details.?
   
   I would appreciate it. I would like to test the patch and the behavior but 
couldn't do it without having the steps to reproduce the problems.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3183) Deprecate old Recon HTTP Server Keytab config key

2020-03-16 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-3183:
-
Summary: Deprecate old Recon HTTP Server Keytab config key  (was: Rename 
Recon HTTP Server Keytab config key)

> Deprecate old Recon HTTP Server Keytab config key
> -
>
> Key: HDDS-3183
> URL: https://issues.apache.org/jira/browse/HDDS-3183
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> The current config key for Recon HTTP Server Keytab file is 
> `ozone.recon.keytab.file`. It needs to be renamed to 
> `ozone.recon.http.kerberos.keytab.file` for consistency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3150) Implement getIfExist in Table and use it in CreateKey/File

2020-03-16 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-3150:
--
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Implement getIfExist in Table and use it in CreateKey/File
> --
>
> Key: HDDS-3150
> URL: https://issues.apache.org/jira/browse/HDDS-3150
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With replay support, we now use the get() API directly.
> Previously, the code in OMKeyRequest.java was:
>  
> {code:java}
> else if (omMetadataManager.getKeyTable().isExist(dbKeyName)) {
>  // TODO: Need to be fixed, as when key already exists, we are
>  // appending new blocks to existing key.
>  keyInfo = omMetadataManager.getKeyTable().get(dbKeyName);{code}
>  
> Now, for every key/file create we use the get API; this was changed for replay:
> {code:java}
> OmKeyInfo dbKeyInfo =
>  omMetadataManager.getKeyTable().get(dbKeyName);
> if (dbKeyInfo != null) {{code}
> The proposal is to replace get with getIfExist and make use of keyMayExist.
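
A hedged sketch of what the proposed getIfExist could look like on a typed
table (field names such as keyCodec, valueCodec and db are assumptions;
RocksDB's keyMayExist can return false positives but never false negatives,
so a negative answer safely skips the full get()):

{code:java}
public VALUE getIfExist(KEY key) throws IOException {
  try {
    byte[] rawKey = keyCodec.toPersistedFormat(key);
    // Bloom-filter/memtable check: a brand new key, the common case on
    // createKey/createFile, is usually rejected here without any read.
    if (!db.keyMayExist(rawKey, new StringBuilder())) {
      return null;
    }
    byte[] rawValue = db.get(rawKey);
    return rawValue == null ? null : valueCodec.fromPersistedFormat(rawValue);
  } catch (RocksDBException e) {
    throw new IOException("getIfExist failed for key " + key, e);
  }
}
{code}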



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org


