[jira] [Updated] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2667:
---
Description: 
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

{code}
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'ozone'
    metrics_path: /prom

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets:
          - "10.120.110.183:8080"
          - "10.120.110.183:8081"
          - "10.120.139.122:9882"
          - "10.120.139.111:9882"
          - "10.120.113.172:9882"
          - "10.121.124.44:9882"
{code}

The UP datanode (122) is not part of the replication factor THREE pipeline. The 
latter is formed by the other three datanodes, which are all DOWN.

  was:
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'ozone'
    metrics_path: /prom

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets:
          - "10.120.110.183:8080"
          - "10.120.110.183:8081"
          - "10.120.139.122:9882"
          - "10.120.139.111:9882"
          - "10.120.113.172:9882"
          - "10.121.124.44:9882"

The UP datanode (122) is not part of the replication factor THREE pipeline. The 
latter is formed by the other three datanodes, which are all DOWN.


> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Attila Doroszlai
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> {code}
> scrape_configs:
>   # The job name is added as a label `job=<job_name>` to any timeseries
>   # scraped from this config.
>   - job_name: 'ozone'
>     metrics_path: /prom
>     # metrics_path defaults to '/metrics'
>     # scheme defaults to 'http'.
>     static_configs:
>       - targets:
>           - "10.120.110.183:8080"
>           - "10.120.110.183:8081"
>           - "10.120.139.122:9882"
>           - "10.120.139.111:9882"
>           - "10.120.113.172:9882"
>           - "10.121.124.44:9882"
> {code}
> The UP datanode (122) is not part of the replication factor THREE pipeline. 
> The latter is formed by the other three datanodes, which are all DOWN.
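For context on the error message itself: Prometheus aborts a scrape when the text exposition contains a {{# TYPE}} line whose type is not one of the five allowed by the text format (counter, gauge, histogram, summary, untyped). A minimal, self-contained sketch of that check — the metric names below are made up for illustration, not taken from Ozone's /prom output:

```java
import java.util.Arrays;
import java.util.List;

/** Sketch: why Prometheus reports "invalid metric type". The text exposition
 *  format only allows five types on "# TYPE" lines; any other token (for
 *  example a raw RocksDB statistics type) fails the whole scrape. */
public class TypeCheck {

  static final List<String> VALID =
      Arrays.asList("counter", "gauge", "histogram", "summary", "untyped");

  /** Returns true if a "# TYPE <name> <type>" line uses a valid type. */
  static boolean isValidTypeLine(String line) {
    String[] parts = line.trim().split("\\s+");
    if (parts.length != 4 || !parts[0].equals("#") || !parts[1].equals("TYPE")) {
      return false;
    }
    return VALID.contains(parts[3]);
  }

  public static void main(String[] args) {
    // A well-formed gauge line passes; a RocksDB-style type name does not.
    System.out.println(isValidTypeLine("# TYPE om_metrics_num_keys gauge"));
    System.out.println(isValidTypeLine("# TYPE rocksdb_stat histogramData"));
  }
}
```

Running it prints {{true}} for the gauge line and {{false}} for the RocksDB-style one.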



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2667:
--

Assignee: Attila Doroszlai

> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Attila Doroszlai
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=<job_name>` to any timeseries
>   # scraped from this config.
>   - job_name: 'ozone'
>     metrics_path: /prom
>     # metrics_path defaults to '/metrics'
>     # scheme defaults to 'http'.
>     static_configs:
>       - targets:
>           - "10.120.110.183:8080"
>           - "10.120.110.183:8081"
>           - "10.120.139.122:9882"
>           - "10.120.139.111:9882"
>           - "10.120.113.172:9882"
>           - "10.121.124.44:9882"
> The UP datanode (122) is not part of the replication factor THREE pipeline. 
> The latter is formed by the other three datanodes, which are all DOWN.






[jira] [Commented] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988553#comment-16988553
 ] 

Attila Doroszlai commented on HDDS-2667:


Thanks [~Sammi] for reporting this.  I can reproduce it by enabling RocksDB 
stats.  Can you please confirm if you have 
{{ozone.metastore.rocksdb.statistics}} enabled in {{ozone-site.xml}}?
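For reference, a hedged illustration of the setting Attila asks about. The property name and file come from the comment above; the value {{ALL}} is only an example of a RocksDB statistics level and is an assumption, not taken from this thread:

```xml
<!-- Illustrative ozone-site.xml fragment. The property name is quoted from
     the comment above; the value "ALL" is an example statistics level and
     an assumption, not confirmed by this thread. -->
<property>
  <name>ozone.metastore.rocksdb.statistics</name>
  <value>ALL</value>
</property>
```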

> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=<job_name>` to any timeseries
>   # scraped from this config.
>   - job_name: 'ozone'
>     metrics_path: /prom
>     # metrics_path defaults to '/metrics'
>     # scheme defaults to 'http'.
>     static_configs:
>       - targets:
>           - "10.120.110.183:8080"
>           - "10.120.110.183:8081"
>           - "10.120.139.122:9882"
>           - "10.120.139.111:9882"
>           - "10.120.113.172:9882"
>           - "10.121.124.44:9882"
> The UP datanode (122) is not part of the replication factor THREE pipeline. 
> The latter is formed by the other three datanodes, which are all DOWN.






[jira] [Updated] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-2667:
-
Description: 
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'ozone'
    metrics_path: /prom

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets:
          - "10.120.110.183:8080"
          - "10.120.110.183:8081"
          - "10.120.139.122:9882"
          - "10.120.139.111:9882"
          - "10.120.113.172:9882"
          - "10.121.124.44:9882"

The UP datanode (122) is not part of the replication factor THREE pipeline. The 
latter is formed by the other three datanodes, which are all DOWN.

  was:
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'ozone'
    metrics_path: /prom

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets:
          - "10.120.110.183:8080"
          - "10.120.110.183:8081"
          - "10.120.139.122:9882"
          - "10.120.139.111:9882"
          - "10.120.113.172:9882"
          - "10.121.124.44:9882"




> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=<job_name>` to any timeseries
>   # scraped from this config.
>   - job_name: 'ozone'
>     metrics_path: /prom
>     # metrics_path defaults to '/metrics'
>     # scheme defaults to 'http'.
>     static_configs:
>       - targets:
>           - "10.120.110.183:8080"
>           - "10.120.110.183:8081"
>           - "10.120.139.122:9882"
>           - "10.120.139.111:9882"
>           - "10.120.113.172:9882"
>           - "10.121.124.44:9882"
> The UP datanode (122) is not part of the replication factor THREE pipeline. 
> The latter is formed by the other three datanodes, which are all DOWN.






[jira] [Updated] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-2667:
-
Attachment: promethues-targets.png

> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=<job_name>` to any timeseries
>   # scraped from this config.
>   - job_name: 'ozone'
>     metrics_path: /prom
>     # metrics_path defaults to '/metrics'
>     # scheme defaults to 'http'.
>     static_configs:
>       - targets:
>           - "10.120.110.183:8080"
>           - "10.120.110.183:8081"
>           - "10.120.139.122:9882"
>           - "10.120.139.111:9882"
>           - "10.120.113.172:9882"
>           - "10.121.124.44:9882"






[jira] [Created] (HDDS-2667) Prometheus reports invalid metric type

2019-12-04 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-2667:


 Summary: Prometheus reports invalid metric type
 Key: HDDS-2667
 URL: https://issues.apache.org/jira/browse/HDDS-2667
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Sammi Chen


Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries
  # scraped from this config.
  - job_name: 'ozone'
    metrics_path: /prom

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets:
          - "10.120.110.183:8080"
          - "10.120.110.183:8081"
          - "10.120.139.122:9882"
          - "10.120.139.111:9882"
          - "10.120.113.172:9882"
          - "10.121.124.44:9882"








[jira] [Updated] (HDDS-2578) Handle InterruptedException in RandomKeyGenerator

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2578:

Status: Patch Available  (was: In Progress)

> Handle InterruptedException in RandomKeyGenerator
> -
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  






[jira] [Updated] (HDDS-2565) Handle InterruptedException in VolumeSet

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2565:

Status: Patch Available  (was: Open)

> Handle InterruptedException in VolumeSet
> 
>
> Key: HDDS-2565
> URL: https://issues.apache.org/jira/browse/HDDS-2565
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9






[jira] [Updated] (HDDS-2578) Handle InterruptedException in RandomKeyGenerator

2019-12-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2578:
-
Labels: newbie pull-request-available sonar  (was: newbie sonar)

> Handle InterruptedException in RandomKeyGenerator
> -
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>
>  RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  






[GitHub] [hadoop-ozone] dineshchitlangia opened a new pull request #305: HDDS-2578. Handle InterruptedException in RandomKeyGenerator

2019-12-04 Thread GitBox
dineshchitlangia opened a new pull request #305: HDDS-2578. Handle 
InterruptedException in RandomKeyGenerator
URL: https://github.com/apache/hadoop-ozone/pull/305
 
 
   ## What changes were proposed in this pull request?
   Fix sonar issue.
   Re-interrupt the current thread when an InterruptedException is encountered.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2578
   
   
   ## How was this patch tested?
   mvn install to ensure it builds cleanly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HDDS-2504) Handle InterruptedException properly

2019-12-04 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988499#comment-16988499
 ] 

Dinesh Chitlangia commented on HDDS-2504:
-

[~cxorm] Using {{Thread.currentThread().interrupt();}} makes sense.
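The pattern agreed on above (Sonar rule S2142: either re-interrupt or rethrow) can be sketched as follows; the class and method names are invented for illustration and are not code from the patch:

```java
/** Sketch of Sonar rule S2142's fix: when InterruptedException is caught
 *  and not rethrown, restore the thread's interrupt flag so callers can
 *  still observe that an interrupt happened. */
public class ReInterrupt {

  /** Sleeps up to millis; returns false (with the interrupt flag restored)
   *  if the wait was interrupted. */
  static boolean awaitQuietly(long millis) {
    try {
      Thread.sleep(millis);
      return true;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // re-interrupt instead of swallowing
      return false;
    }
  }

  public static void main(String[] args) {
    Thread worker = new Thread(() -> {
      boolean completed = awaitQuietly(60_000);
      // Because the flag was restored, the worker can still see the interrupt.
      System.out.println("completed=" + completed
          + " interrupted=" + Thread.currentThread().isInterrupted());
    });
    worker.start();
    worker.interrupt();
    try {
      worker.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // same pattern at the call site
    }
  }
}
```

Running main prints {{completed=false interrupted=true}}: the interrupt cut the sleep short, yet the status survives for the caller's checks.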

> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
>  






[jira] [Assigned] (HDDS-2504) Handle InterruptedException properly

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2504:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
>  






[jira] [Assigned] (HDDS-2556) Handle InterruptedException in BlockOutputStream

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2556:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException in BlockOutputStream
> 
>
> Key: HDDS-2556
> URL: https://issues.apache.org/jira/browse/HDDS-2556
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix these 5 instances
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]
>  






[jira] [Assigned] (HDDS-2555) Handle InterruptedException in XceiverClientGrpc

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2555:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException in XceiverClientGrpc
> 
>
> Key: HDDS-2555
> URL: https://issues.apache.org/jira/browse/HDDS-2555
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix these 3 instances
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]






[jira] [Updated] (HDDS-2578) Handle InterruptedException in RandomKeyGenerator

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2578:

Description: 
 RandomKeyGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]

 

  was:
BaseFreonGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]

 

RandomKeyGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]

 

ProgressBar: 3 instances listed below

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]

 


> Handle InterruptedException in RandomKeyGenerator
> -
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
>  RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  






[jira] [Updated] (HDDS-2578) Handle InterruptedException in RandomKeyGenerator

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2578:

Summary: Handle InterruptedException in RandomKeyGenerator  (was: Handle 
InterruptedException in Freon package)

> Handle InterruptedException in RandomKeyGenerator
> -
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> BaseFreonGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]
>  
> RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  
> ProgressBar: 3 instances listed below
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]
>  






[jira] [Assigned] (HDDS-2578) Handle InterruptedException in Freon package

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2578:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException in Freon package
> 
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> BaseFreonGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]
>  
> RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  
> ProgressBar: 3 instances listed below
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]
>  






[jira] [Assigned] (HDDS-2574) Handle InterruptedException in OzoneDelegationTokenSecretManager

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2574:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException in OzoneDelegationTokenSecretManager
> 
>
> Key: HDDS-2574
> URL: https://issues.apache.org/jira/browse/HDDS-2574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]
>  






[jira] [Assigned] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-2572:
---

Assignee: Dinesh Chitlangia

> Handle InterruptedException in SCMSecurityProtocolServer
> 
>
> Key: HDDS-2572
> URL: https://issues.apache.org/jira/browse/HDDS-2572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]
>  
>  






[jira] [Resolved] (HDDS-2012) Support GDPR-Right to Erasure feature on Ozone

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2012.
-
   Fix Version/s: 0.5.0
Release Note: GDPR Right to Erasure feature, with known limitations for 
backup/restore scenarios; existing buckets require manual copying of keys.
Target Version/s: 0.5.0
  Resolution: Implemented

> Support GDPR-Right to Erasure feature on Ozone
> --
>
> Key: HDDS-2012
> URL: https://issues.apache.org/jira/browse/HDDS-2012
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: compliance
> Fix For: 0.5.0
>
> Attachments: Ozone GDPR Framework.pdf, Ozone GDPR 
> Framework_updated.pdf
>
>
> While several aspects of GDPR can be achieved in various ways in Ozone, the 
> [Right to 
> Erasure|https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/]
>  support will need some work.
> This is an umbrella jira to add support for this feature to Ozone.
> Design doc is attached.






[jira] [Comment Edited] (HDDS-2012) Support GDPR-Right to Erasure feature on Ozone

2019-12-04 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988482#comment-16988482
 ] 

Dinesh Chitlangia edited comment on HDDS-2012 at 12/5/19 5:36 AM:
--

The updated design doc has been attached.
 Thank you [~aengineer], [~arp], [~xyao] for your valuable inputs/feedback.

As all sub-tasks have been implemented, I am resolving this jira.

 


was (Author: dineshchitlangia):
The updated design doc has been attached.
Thank you [~aengineer], [~arp], [~xyao] for your valuable inputs/feedback.

 

> Support GDPR-Right to Erasure feature on Ozone
> --
>
> Key: HDDS-2012
> URL: https://issues.apache.org/jira/browse/HDDS-2012
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: compliance
> Attachments: Ozone GDPR Framework.pdf, Ozone GDPR 
> Framework_updated.pdf
>
>
> While several aspects of GDPR can be achieved in various ways in Ozone, the 
> [Right to 
> Erasure|https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/]
>  support will need some work.
> This is an umbrella jira to add support for this feature to Ozone.
> Design doc is attached.






[jira] [Commented] (HDDS-2012) Support GDPR-Right to Erasure feature on Ozone

2019-12-04 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988482#comment-16988482
 ] 

Dinesh Chitlangia commented on HDDS-2012:
-

The updated design doc has been attached.
Thank you [~aengineer], [~arp], [~xyao] for your valuable inputs/feedback.

 

> Support GDPR-Right to Erasure feature on Ozone
> --
>
> Key: HDDS-2012
> URL: https://issues.apache.org/jira/browse/HDDS-2012
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: compliance
> Attachments: Ozone GDPR Framework.pdf, Ozone GDPR 
> Framework_updated.pdf
>
>
> While several aspects of GDPR can be achieved in various ways in Ozone, the 
> [Right to 
> Erasure|https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/]
>  support will need some work.
> This is an umbrella jira to add support for this feature to Ozone.
> Design doc is attached.






[jira] [Updated] (HDDS-2012) Support GDPR-Right to Erasure feature on Ozone

2019-12-04 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2012:

Attachment: Ozone GDPR Framework_updated.pdf

> Support GDPR-Right to Erasure feature on Ozone
> --
>
> Key: HDDS-2012
> URL: https://issues.apache.org/jira/browse/HDDS-2012
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client, Ozone Manager
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: compliance
> Attachments: Ozone GDPR Framework.pdf, Ozone GDPR 
> Framework_updated.pdf
>
>
> While several aspects of GDPR can be achieved in various ways in Ozone, the 
> [Right to 
> Erasure|https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/]
>  support will need some work.
> This is an umbrella jira to add support for this feature to Ozone.
> Design doc is attached.






[jira] [Resolved] (HDDS-2622) Add config parameter for setting limit on total bytes of pending requests in Ratis

2019-12-04 Thread Lokesh Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-2622.
---
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add config parameter for setting limit on total bytes of pending requests in 
> Ratis
> --
>
> Key: HDDS-2622
> URL: https://issues.apache.org/jira/browse/HDDS-2622
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> RATIS-714 introduced a config setting for limiting the max number of bytes of 
> pending requests. This Jira aims to add a config in Ozone to set the same in 
> DN Ratis.
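
For illustration, such a limit would typically surface as an `ozone-site.xml` property along these lines. This is a hypothetical sketch: the property name, default value, and description are assumptions for illustration, not taken from the actual patch.

```xml
<!-- Hypothetical ozone-site.xml fragment; name and default are assumptions. -->
<property>
  <name>dfs.container.ratis.leader.pending.bytes.limit</name>
  <value>1GB</value>
  <description>Limit on the total bytes of pending requests queued on the
    datanode Ratis server, mirroring the RATIS-714 write byte limit.</description>
</property>
```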






[GitHub] [hadoop-ozone] lokeshj1703 commented on issue #264: HDDS-2622. Add config parameter for setting limit on total bytes of pending requests in Ratis.

2019-12-04 Thread GitBox
lokeshj1703 commented on issue #264: HDDS-2622. Add config parameter for 
setting limit on total bytes of pending requests in Ratis.
URL: https://github.com/apache/hadoop-ozone/pull/264#issuecomment-561974431
 
 
   @hanishakoneru Thanks for the contribution! I have merged the PR to master 
branch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop-ozone] lokeshj1703 merged pull request #264: HDDS-2622. Add config parameter for setting limit on total bytes of pending requests in Ratis.

2019-12-04 Thread GitBox
lokeshj1703 merged pull request #264: HDDS-2622. Add config parameter for 
setting limit on total bytes of pending requests in Ratis.
URL: https://github.com/apache/hadoop-ozone/pull/264
 
 
   





[GitHub] [hadoop-ozone] cxorm commented on issue #268: HDDS-2565. Handle InterruptedException in VolumeSet

2019-12-04 Thread GitBox
cxorm commented on issue #268: HDDS-2565. Handle InterruptedException in 
VolumeSet
URL: https://github.com/apache/hadoop-ozone/pull/268#issuecomment-561950757
 
 
   The acceptance test failure does not seem to be related to this patch.





[GitHub] [hadoop-ozone] mukul1987 commented on issue #236: HDDS-2284. XceiverClientMetrics should be initialised as part of XceiverClientManager constructor.

2019-12-04 Thread GitBox
mukul1987 commented on issue #236: HDDS-2284. XceiverClientMetrics should be 
initialised as part of XceiverClientManager constructor.
URL: https://github.com/apache/hadoop-ozone/pull/236#issuecomment-561949853
 
 
   Thanks for the review @anuengineer, I realized the issue and have fixed it 
in the next patch.





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #282: HDDS-2646. Start acceptance tests only if at least one THREE pipeline is available

2019-12-04 Thread GitBox
ChenSammi commented on a change in pull request #282: HDDS-2646. Start 
acceptance tests only if at least one THREE pipeline is available
URL: https://github.com/apache/hadoop-ozone/pull/282#discussion_r354087388
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone-mr/common-config
 ##
 @@ -22,6 +22,7 @@ OZONE-SITE.XML_ozone.scm.block.client.address=scm
 OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
 OZONE-SITE.XML_ozone.scm.client.address=scm
 OZONE-SITE.XML_ozone.replication=3
+OZONE-SITE.XML_hdds.scm.safemode.min.datanode=3
 
 Review comment:
   I see. Thanks for the explanation. 





[jira] [Updated] (HDDS-1993) Merge OzoneManagerRequestHandler and OzoneManagerHARequestHandlerImpl

2019-12-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1993:
-
Status: Patch Available  (was: In Progress)

> Merge OzoneManagerRequestHandler and OzoneManagerHARequestHandlerImpl
> -
>
> Key: HDDS-1993
> URL: https://issues.apache.org/jira/browse/HDDS-1993
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once the HA and non-HA code paths are merged to use the new OM HA code, we can 
> merge these classes and remove the unused code.






[jira] [Updated] (HDDS-1993) Merge OzoneManagerRequestHandler and OzoneManagerHARequestHandlerImpl

2019-12-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1993:
-
Labels: pull-request-available  (was: )

> Merge OzoneManagerRequestHandler and OzoneManagerHARequestHandlerImpl
> -
>
> Key: HDDS-1993
> URL: https://issues.apache.org/jira/browse/HDDS-1993
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Once the HA and non-HA code paths are merged to use the new OM HA code, we can 
> merge these classes and remove the unused code.






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #304: HDDS-1993. Merge OzoneManagerRequestHandler and OzoneManagerHARequest…

2019-12-04 Thread GitBox
bharatviswa504 opened a new pull request #304: HDDS-1993. Merge 
OzoneManagerRequestHandler and OzoneManagerHARequest…
URL: https://github.com/apache/hadoop-ozone/pull/304
 
 
   ## What changes were proposed in this pull request?
   
   Merge HARequestHandler and non-HA request handler.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-1993
   
   
   ## How was this patch tested?
   
   Ran a few integration tests which execute this code path. 





[jira] [Updated] (HDDS-2666) Suppress loader constraint violation message in TestOzoneFileSystemWithMocks

2019-12-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2666:
-
Labels: pull-request-available  (was: )

> Suppress loader constraint violation message in TestOzoneFileSystemWithMocks
> 
>
> Key: HDDS-2666
> URL: https://issues.apache.org/jira/browse/HDDS-2666
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> {{TestOzoneFileSystemWithMocks}} throws a LinkageError when run (though the 
> test itself succeeds):
> {code}
> ERROR StatusLogger Could not reconfigure JMX
>  java.lang.LinkageError: loader constraint violation: loader (instance of 
> org/powermock/core/classloader/MockClassLoader) previously initiated loading 
> for a different type with name "javax/management/MBeanServer"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
>   at 
> org.powermock.core.classloader.MockClassLoader.loadUnmockedClass(MockClassLoader.java:250)
>   at 
> org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:194)
>   at 
> org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:71)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at 
> org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:335)
>   at 
> org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:259)
>   at 
> org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:164)
>   at 
> org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
>   at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
>   at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:619)
>   at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
>   at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
>   at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>   at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>   at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
>   at 
> org.apache.commons.logging.LogAdapter$Log4jLog.<init>(LogAdapter.java:135)
>   at 
> org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:102)
>   at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:79)
>   at 
> org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:46)
>   at 
> org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:41)
>   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
>   at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:136)
>   at 
> org.apache.hadoop.fs.ozone.TestOzoneFileSystemWithMocks.testFSUriWithHostPortOverrides(TestOzoneFileSystemWithMocks.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:300)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>   at 
> 

[GitHub] [hadoop-ozone] smengcl opened a new pull request #303: HDDS-2666. Suppress loader constraint violation message in TestOzoneFileSystemWithMocks

2019-12-04 Thread GitBox
smengcl opened a new pull request #303: HDDS-2666. Suppress loader constraint 
violation message in TestOzoneFileSystemWithMocks
URL: https://github.com/apache/hadoop-ozone/pull/303
 
 
   ## What changes were proposed in this pull request?
   
   Suppress loader constraint violation error message in 
`TestOzoneFileSystemWithMocks`:
   ```
   ERROR StatusLogger Could not reconfigure JMX
java.lang.LinkageError: loader constraint violation: loader (instance of 
org/powermock/core/classloader/MockClassLoader) previously initiated loading 
for a different type with name "javax/management/MBeanServer"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at 
org.powermock.core.classloader.MockClassLoader.loadUnmockedClass(MockClassLoader.java:250)
at 
org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:194)
at 
org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:71)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:335)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:259)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:164)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at 
org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:619)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
   ...
   ```
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2666
   
   ## How was this patch tested?
   
   Ran `TestOzoneFileSystemWithMocks`; it should no longer print 
`java.lang.LinkageError: loader constraint violation`.
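
As an aside, one generic way to silence this particular Log4j 2 StatusLogger path is to disable Log4j's JMX integration entirely; whether this PR takes that route or a different one is not stated in the thread.

```properties
# log4j2.component.properties on the test classpath
# (equivalent to passing -Dlog4j2.disable.jmx=true to the JVM)
log4j2.disable.jmx=true
```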





[jira] [Updated] (HDDS-2666) Suppress loader constraint violation message in TestOzoneFileSystemWithMocks

2019-12-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2666:
-
Summary: Suppress loader constraint violation message in 
TestOzoneFileSystemWithMocks  (was: Suppress loader constraint violation in 
TestOzoneFileSystemWithMocks)

> Suppress loader constraint violation message in TestOzoneFileSystemWithMocks
> 
>
> Key: HDDS-2666
> URL: https://issues.apache.org/jira/browse/HDDS-2666
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> {{TestOzoneFileSystemWithMocks}} throws a LinkageError when run (though the 
> test itself succeeds):
> {code}
> ERROR StatusLogger Could not reconfigure JMX
>  java.lang.LinkageError: loader constraint violation: loader (instance of 
> org/powermock/core/classloader/MockClassLoader) previously initiated loading 
> for a different type with name "javax/management/MBeanServer"
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
>   at 
> org.powermock.core.classloader.MockClassLoader.loadUnmockedClass(MockClassLoader.java:250)
>   at 
> org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:194)
>   at 
> org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:71)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at 
> org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:335)
>   at 
> org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:259)
>   at 
> org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:164)
>   at 
> org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
>   at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
>   at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:619)
>   at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
>   at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
>   at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
>   at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
>   at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
>   at 
> org.apache.commons.logging.LogAdapter$Log4jLog.<init>(LogAdapter.java:135)
>   at 
> org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:102)
>   at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:79)
>   at 
> org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:46)
>   at 
> org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:41)
>   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
>   at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:136)
>   at 
> org.apache.hadoop.fs.ozone.TestOzoneFileSystemWithMocks.testFSUriWithHostPortOverrides(TestOzoneFileSystemWithMocks.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:300)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>   at 
> 

[jira] [Updated] (HDDS-2666) Suppress loader constraint violation in TestOzoneFileSystemWithMocks

2019-12-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2666:
-
Description: 
{{TestOzoneFileSystemWithMocks}} throws a LinkageError when run (though the 
test itself succeeds):

{code}
ERROR StatusLogger Could not reconfigure JMX
 java.lang.LinkageError: loader constraint violation: loader (instance of 
org/powermock/core/classloader/MockClassLoader) previously initiated loading 
for a different type with name "javax/management/MBeanServer"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at 
org.powermock.core.classloader.MockClassLoader.loadUnmockedClass(MockClassLoader.java:250)
at 
org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:194)
at 
org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:71)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:335)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:259)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:164)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at 
org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:619)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at 
org.apache.commons.logging.LogAdapter$Log4jLog.<init>(LogAdapter.java:135)
at 
org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:102)
at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:79)
at 
org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:46)
at 
org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:41)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:136)
at 
org.apache.hadoop.fs.ozone.TestOzoneFileSystemWithMocks.testFSUriWithHostPortOverrides(TestOzoneFileSystemWithMocks.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:300)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:288)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:86)
at 

[jira] [Created] (HDDS-2666) Suppress loader constraint violation in TestOzoneFileSystemWithMocks

2019-12-04 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-2666:


 Summary: Suppress loader constraint violation in 
TestOzoneFileSystemWithMocks
 Key: HDDS-2666
 URL: https://issues.apache.org/jira/browse/HDDS-2666
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng


{{TestOzoneFileSystemWithMocks}} throws a LinkageError when run (though the 
test itself succeeds):

{code}
ERROR StatusLogger Could not reconfigure JMX
 java.lang.LinkageError: loader constraint violation: loader (instance of 
org/powermock/core/classloader/MockClassLoader) previously initiated loading 
for a different type with name "javax/management/MBeanServer"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at 
org.powermock.core.classloader.MockClassLoader.loadUnmockedClass(MockClassLoader.java:250)
at 
org.powermock.core.classloader.MockClassLoader.loadModifiedClass(MockClassLoader.java:194)
at 
org.powermock.core.classloader.DeferSupportingClassLoader.loadClass(DeferSupportingClassLoader.java:71)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterAllMatching(Server.java:335)
at 
org.apache.logging.log4j.core.jmx.Server.unregisterLoggerContext(Server.java:259)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:164)
at 
org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at 
org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:619)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:636)
at 
org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:231)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at 
org.apache.commons.logging.LogAdapter$Log4jLog.<init>(LogAdapter.java:135)
at 
org.apache.commons.logging.LogAdapter$Log4jAdapter.createLog(LogAdapter.java:102)
at org.apache.commons.logging.LogAdapter.createLog(LogAdapter.java:79)
at 
org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:46)
at 
org.apache.commons.logging.LogFactoryService.getInstance(LogFactoryService.java:41)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:136)
at 
org.apache.hadoop.fs.ozone.TestOzoneFileSystemWithMocks.testFSUriWithHostPortOverrides(TestOzoneFileSystemWithMocks.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:88)
at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:96)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:300)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
at 

[jira] [Assigned] (HDDS-2661) Use field based Config annotation instead of method based

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2661:
--

Assignee: Marton Elek

> Use field based Config annotation instead of method based
> -
>
> Key: HDDS-2661
> URL: https://issues.apache.org/jira/browse/HDDS-2661
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-2413 proposes an additional usage of the @Config annotation: to set 
> configuration based on an existing configuration class.
> But as of now we annotate the setters instead of the fields. To avoid 
> annotation duplication (we need to read the values from the getters or the 
> fields), I propose switching to field-based annotations instead of 
> setter-based annotations.
> I think it's more readable and additional validation (even the class level 
> validations) can be done in a @PostConstruct method.
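As a rough illustration of the proposed direction, a field-level annotation can be read once by reflection, so no duplicate annotation on getters or setters is needed. The annotation, class, and keys below are hypothetical stand-ins, not the actual org.apache.hadoop.hdds.conf API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

public class FieldConfigSketch {

  // Hypothetical field-level annotation; the real Ozone @Config has more
  // attributes (type, tags, description, etc.).
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.FIELD)
  @interface Config {
    String key();
    String defaultValue();
  }

  // Example configuration class annotated on a field instead of a setter.
  static class ScmConfig {
    @Config(key = "ozone.scm.names", defaultValue = "localhost")
    private String scmNames;

    String getScmNames() {
      return scmNames;
    }
  }

  // Inject values by reading the field annotations directly.
  static <T> T inject(T instance, Map<String, String> conf) throws Exception {
    for (Field field : instance.getClass().getDeclaredFields()) {
      Config annotation = field.getAnnotation(Config.class);
      if (annotation != null) {
        String value =
            conf.getOrDefault(annotation.key(), annotation.defaultValue());
        field.setAccessible(true);
        field.set(instance, value);
      }
    }
    return instance;
  }

  public static void main(String[] args) throws Exception {
    ScmConfig config =
        inject(new ScmConfig(), Map.of("ozone.scm.names", "scm1"));
    System.out.println(config.getScmNames()); // prints "scm1"
  }
}
```

A @PostConstruct-style validation hook could then be invoked after injection, once all fields are populated.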



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] anuengineer commented on issue #236: HDDS-2284. XceiverClientMetrics should be initialised as part of XceiverClientManager constructor.

2019-12-04 Thread GitBox
anuengineer commented on issue #236: HDDS-2284. XceiverClientMetrics should be 
initialised as part of XceiverClientManager constructor.
URL: https://github.com/apache/hadoop-ozone/pull/236#issuecomment-561832338
 
 
   @mukul1987  The failures look like they are related to this patch in the 
acceptance tests. Can you please take a look when you get a chance?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [hadoop-ozone] smengcl edited a comment on issue #148: HDDS-2105. Merge OzoneClientFactory#getRpcClient functions

2019-12-04 Thread GitBox
smengcl edited a comment on issue #148: HDDS-2105. Merge 
OzoneClientFactory#getRpcClient functions
URL: https://github.com/apache/hadoop-ozone/pull/148#issuecomment-561825329
 
 
   acceptance test failure might be related. Looking into it.





[GitHub] [hadoop-ozone] smengcl commented on issue #148: HDDS-2105. Merge OzoneClientFactory#getRpcClient functions

2019-12-04 Thread GitBox
smengcl commented on issue #148: HDDS-2105. Merge 
OzoneClientFactory#getRpcClient functions
URL: https://github.com/apache/hadoop-ozone/pull/148#issuecomment-561825329
 
 
   Seems the acceptance test is related. Looking into it.





[GitHub] [hadoop-ozone] anuengineer commented on issue #292: HDDS-2645. Refactor MiniOzoneChaosCluster to a different package to add filesystem tests.

2019-12-04 Thread GitBox
anuengineer commented on issue #292: HDDS-2645. Refactor MiniOzoneChaosCluster 
to a different package to add filesystem tests.
URL: https://github.com/apache/hadoop-ozone/pull/292#issuecomment-561824964
 
 
   @mukul1987  There are 2 checkstyle issues -- 
   ` [ERROR] 
src/test/java/org/apache/hadoop/ozone/TestMiniChaosOzoneCluster.java:[92] 
(sizes) LineLength: Line is longer than 80 characters (found 81).
   [ERROR] src/test/java/org/apache/hadoop/ozone/utils/LoadBucket.java:[63,23] 
(whitespace) ParenPad: '(' is followed by whitespace.
   `





[jira] [Resolved] (HDDS-2646) Start acceptance tests only if at least one THREE pipeline is available

2019-12-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2646.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to master. Thanks for the contribution.

> Start acceptance tests only if at least one THREE pipeline is available
> ---
>
> Key: HDDS-2646
> URL: https://issues.apache.org/jira/browse/HDDS-2646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: docker-ozoneperf-ozoneperf-basic-scm.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After HDDS-2034 (or even before?) pipeline creation (or the status transition 
> from ALLOCATED to OPEN) requires at least one pipeline report from all of the 
> datanodes, which means that the cluster might not be usable even if it's out 
> of safe mode AND there are at least three datanodes.
> It makes all the acceptance tests unstable.
> For example in 
> [this|https://github.com/apache/hadoop-ozone/pull/263/checks?check_run_id=324489319]
>  run.
> {code:java}
> scm_1 | 2019-11-28 11:22:54,401 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode 548f146f-2166-440a-b9f1-83086591ae26
> scm_1 | 2019-11-28 11:22:54,402 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c
> scm_1 | 2019-11-28 11:22:54,404 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode 47dbb8e4-bbde-4164-a798-e47e8c696fb5
> scm_1 | 2019-11-28 11:22:54,405 INFO pipeline.PipelineStateManager: 
> Created pipeline Pipeline[ Id: 8dc4aeb6-5ae2-46a0-948d-287c97dd81fb, Nodes: 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}47dbb8e4-bbde-4164-a798-e47e8c696fb5{ip: 172.24.0.2, host: 
> ozoneperf_datanode_2.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}, Type:RATIS, Factor:THREE, State:ALLOCATED]
> scm_1 | 2019-11-28 11:22:56,975 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:22:58,018 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:01,871 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:02,817 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:02,847 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null} {code}
> As you can see the pipeline is created but the cluster is not usable, as 
> it's not yet reported back by datanode_2:
> {code:java}
> scm_1 | 2019-11-28 11:23:13,879 WARN block.BlockManagerImpl: Pipeline 
> creation failed for type:RATIS factor:THREE. Retrying get pipelines call once.
> scm_1 | 
> org.apache.hadoop.hdds.scm.pipeline.InsufficientDatanodesException: Cannot 
> create pipeline of factor 3 using 0 nodes.{code}
>  The quick fix is to configure all the compose clusters to wait until one 
> pipeline is available. This can be done by adjusting the number of the 
> required datanodes:
> {code:java}
> // We only care about THREE replica pipeline
> int 

[GitHub] [hadoop-ozone] anuengineer commented on issue #282: HDDS-2646. Start acceptance tests only if at least one THREE pipeline is available

2019-12-04 Thread GitBox
anuengineer commented on issue #282: HDDS-2646. Start acceptance tests only if 
at least one THREE pipeline is available
URL: https://github.com/apache/hadoop-ozone/pull/282#issuecomment-561823350
 
 
   @elek and @adoroszlai  Thanks for explaining this patch and next patch in 
pipeline to me. Appreciate it. I have committed this patch to the master. 
@ChenSammi  Thanks for the review.
   





[GitHub] [hadoop-ozone] anuengineer merged pull request #282: HDDS-2646. Start acceptance tests only if at least one THREE pipeline is available

2019-12-04 Thread GitBox
anuengineer merged pull request #282: HDDS-2646. Start acceptance tests only if 
at least one THREE pipeline is available
URL: https://github.com/apache/hadoop-ozone/pull/282
 
 
   





[jira] [Updated] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2019-12-04 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2665:
-
Summary: Implement new Ozone Filesystem scheme ofs://  (was: Implement 
ofs:// scheme)

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be accessed from a single root.



--



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #296: HDDS-2659. KeyValueHandler#handleCreateContainer should log the exception on container creation failure

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #296: HDDS-2659. 
KeyValueHandler#handleCreateContainer should log the exception on container 
creation failure
URL: https://github.com/apache/hadoop-ozone/pull/296#discussion_r353955804
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
 ##
 @@ -146,6 +146,7 @@ public static ContainerCommandResponseProto 
logAndReturnError(
 log.info("Operation: {} , Trace ID: {} , Message: {} , Result: {}",
 request.getCmdType().name(), request.getTraceID(),
 ex.getMessage(), ex.getResult().getValueDescriptor().getName());
+log.error("StorageContainerException Occurred!", ex);
 
 Review comment:
   We already have a log.info just above it with the exception message and the 
result. If more information about the exception is needed, can we consolidate 
it into the previous message?





[jira] [Commented] (HDDS-2474) Remove OzoneClient exception Precondition check

2019-12-04 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988171#comment-16988171
 ] 

Anu Engineer commented on HDDS-2474:


Just did a bunch of activity on this JIRA to indicate that this should not be 
committed unless [~hanishakoneru] explicitly requests it. Thanks. Please see 
the Github issue comments for more details.

> Remove OzoneClient exception Precondition check
> ---
>
> Key: HDDS-2474
> URL: https://issues.apache.org/jira/browse/HDDS-2474
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If RaftCleintReply encounters an exception other than NotLeaderException, 
> NotReplicatedException, StateMachineException or LeaderNotReady, then it sets 
> success to false but there is no exception set. This causes a Precondition 
> check failure in XceiverClientRatis which expects that there should be an 
> exception if success=false.



--



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #234: HDDS-2531. Sonar : remove duplicate string literals in BlockOutputStream

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #234: HDDS-2531. Sonar : remove 
duplicate string literals in BlockOutputStream
URL: https://github.com/apache/hadoop-ozone/pull/234#discussion_r353954352
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -609,11 +603,8 @@ private void writeChunkToContainer(ChunkBuffer chunk) 
throws IOException {
 }
 return e;
   }, responseExecutor).exceptionally(e -> {
-if (LOG.isDebugEnabled()) {
-  LOG.debug(
-  "writing chunk failed " + chunkInfo.getChunkName() + " blockID "
-  + blockID + " with exception " + e.getLocalizedMessage());
-}
+LOG.debug("writing chunk failed {} blockID {} with exception {}",
 
 Review comment:
   This is part of the I/O path, where we have seen perf issues without the if 
(LOG.isDebugEnabled()) guard, which slf4j claims parameterized logging will 
solve. So my suggestion is to keep the if (LOG.isDebugEnabled()) and ignore the 
Sonar warning, if any, here.
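The trade-off can be shown with a small standalone sketch (the logger below is a minimal stand-in, not slf4j itself): parameterized logging avoids eager string concatenation, but the log arguments are still evaluated on every call unless the isDebugEnabled() guard is present, which matters when an argument is expensive to compute on a hot I/O path.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class DebugGuardSketch {

  // Minimal stand-in for an slf4j-style logger; debug is disabled to
  // mimic the production I/O path.
  static class FakeLogger {
    boolean debugEnabled = false;

    boolean isDebugEnabled() {
      return debugEnabled;
    }

    void debug(String format, Object... args) {
      if (debugEnabled) {
        System.out.println(format + " " + java.util.Arrays.toString(args));
      }
    }
  }

  static final FakeLogger LOG = new FakeLogger();
  static final AtomicInteger EXPENSIVE_CALLS = new AtomicInteger();

  // Simulates an expensive log argument, e.g. building a chunk name.
  static String expensiveChunkName() {
    EXPENSIVE_CALLS.incrementAndGet();
    return "chunk-1";
  }

  public static void main(String[] args) {
    // Parameterized logging alone: the format string is not concatenated,
    // but the argument is still computed even though debug is off.
    LOG.debug("writing chunk failed {}", expensiveChunkName());
    int withoutGuard = EXPENSIVE_CALLS.get();

    // With the guard, the argument is never computed on the hot path.
    if (LOG.isDebugEnabled()) {
      LOG.debug("writing chunk failed {}", expensiveChunkName());
    }
    int withGuard = EXPENSIVE_CALLS.get() - withoutGuard;

    System.out.println(withoutGuard + " " + withGuard); // prints "1 0"
  }
}
```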





[GitHub] [hadoop-ozone] anuengineer closed pull request #157: HDDS-2474. Remove OzoneClient exception Precondition check.

2019-12-04 Thread GitBox
anuengineer closed pull request #157: HDDS-2474. Remove OzoneClient exception 
Precondition check.
URL: https://github.com/apache/hadoop-ozone/pull/157
 
 
   





[GitHub] [hadoop-ozone] adoroszlai commented on issue #286: HDDS-2475. Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-04 Thread GitBox
adoroszlai commented on issue #286: HDDS-2475. Unregister 
ContainerMetadataScrubberMetrics on thread exit
URL: https://github.com/apache/hadoop-ozone/pull/286#issuecomment-561813777
 
 
   Thanks @xiaoyuyao for the review and @anuengineer for committing it.
   
   I'll resolve HDDS-2664, since all followup issues were addressed in 
889664c06a19520705027d9903bfd5cbec62948b and 
1233fec96c07372ae02430f2c6a3b73b2723604e.





[GitHub] [hadoop-ozone] anuengineer commented on issue #238: HDDS-2588. Consolidate compose environments

2019-12-04 Thread GitBox
anuengineer commented on issue #238: HDDS-2588. Consolidate compose environments
URL: https://github.com/apache/hadoop-ozone/pull/238#issuecomment-561812626
 
 
   > It's -- at least partially -- a philosophical question, what is Ozone. 
(And as it's philosophy, I am interested about the opinion of our philosopher 
of Ozone cc @anuengineer)
   
   I am okay with what this patch says -- in reality, irrespective of what we 
say, there will be monitoring, tracing and logging collectors in place for 
most data centers. So irrespective of what we do (Prometheus, Jaeger, Grafana, 
Fluentd) the system admins will do the right thing for them. 
   
   We are just showcasing the fact that it is trivial to do this with Ozone. 
So when someone is evaluating Ozone, the question of how one can really run this 
service in production is answered via the presence of these tools. I would go a 
step further and add these as recipes in the Ozone documentation too.
   
   





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #238: HDDS-2588. Consolidate compose environments

2019-12-04 Thread GitBox
anuengineer commented on a change in pull request #238: HDDS-2588. Consolidate 
compose environments
URL: https://github.com/apache/hadoop-ozone/pull/238#discussion_r353947914
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/run.sh
 ##
 @@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+declare -ix OZONE_REPLICATION_FACTOR
+: ${OZONE_REPLICATION_FACTOR:=1}
+docker-compose up --scale datanode=${OZONE_REPLICATION_FACTOR} --no-recreate 
"$@"
 
 Review comment:
   Well, while I agree the call for re-create is correct here, @adoroszlai you 
did ask the question. The issue can happen if we have a cluster that was 
running but has errored out. Then re-running this command will not reset the 
system. But it is probably something that we can live with, or fix much later. 
I predict for a long time, when someone reports an issue, we will say "make 
sure you kill all running docker containers". But then, traditionally that is 
our first debugging step whenever someone tells us that docker-based stuff is 
not stable for them. 
   
   Just to make sure, I am +1 and ok with this change. Just responding to your 
question; that is all.





[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao merged pull request #284: HDDS-2653. Improve executor memory usage in 
new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284
 
 
   





[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao commented on issue #284: HDDS-2653. Improve executor memory usage in 
new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284#issuecomment-561808495
 
 
   LGTM, +1. Thanks @adoroszlai  for the improvement. 





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve 
executor memory usage in new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284#discussion_r353939273
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -101,71 +101,102 @@
   private MetricRegistry metrics = new MetricRegistry();
 
   private AtomicLong successCounter;
-
   private AtomicLong failureCounter;
+  private AtomicLong attemptCounter;
 
   private long startTime;
 
   private PathSchema pathSchema;
+  private String spanName;
+  private ExecutorService executor;
+  private ProgressBar progressBar;
 
   /**
* The main logic to execute a test generator.
*
* @param provider creates the new steps to execute.
*/
   public void runTests(TaskProvider provider) {
+setup(provider);
+startTaskRunners(provider);
+waitForCompletion();
+shutdown();
+reportAnyFailure();
+  }
 
-ExecutorService executor = Executors.newFixedThreadPool(threadNo);
-
-ProgressBar progressBar =
-new ProgressBar(System.out, testNo, successCounter::get);
-progressBar.start();
-
-startTime = System.currentTimeMillis();
-//schedule the execution of all the tasks.
-
-for (long i = 0; i < testNo; i++) {
-
-  final long counter = i;
+  /**
+   * Performs {@code provider}-specific initialization.
+   */
+  private void setup(TaskProvider provider) {
+//provider is usually a lambda, print out only the owner class name:
+spanName = provider.getClass().getSimpleName().split("\\$")[0];
+  }
 
-  //provider is usually a lambda, print out only the owner class name:
-  String spanName = provider.getClass().getSimpleName().split("\\$")[0];
+  /**
+   * Launches {@code threadNo} task runners in executor.  Each one executes 
test
+   * tasks in a loop until completion or failure.
+   */
+  private void startTaskRunners(TaskProvider provider) {
+for (int i = 0; i < threadNo; i++) {
 
 Review comment:
   NIT: threadNo is confusing here, as we are submitting tasks with no 
guaranteed # of threads?





[jira] [Created] (HDDS-2664) Add Unregister calls for Metrics

2019-12-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2664:
--

 Summary: Add Unregister calls for Metrics
 Key: HDDS-2664
 URL: https://issues.apache.org/jira/browse/HDDS-2664
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Anu Engineer


During the code review of HDDS-2475, [~xyao] made the following observation. 

```
I also notice there are other metrics that never got unregistered:
SCMContainerPlacementMetrics and CSMMetrics.

```

This JIRA is to track that issue, and add logging if possible in HDDS-2475 code 
path.

[~adoroszlai]



--



[GitHub] [hadoop-ozone] anuengineer commented on issue #286: HDDS-2475. Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-04 Thread GitBox
anuengineer commented on issue #286: HDDS-2475. Unregister 
ContainerMetadataScrubberMetrics on thread exit
URL: https://github.com/apache/hadoop-ozone/pull/286#issuecomment-561805829
 
 
   Filed https://issues.apache.org/jira/browse/HDDS-2664 to track the other 
metric issue and add logging in this path.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve 
executor memory usage in new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284#discussion_r353941495
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -212,6 +249,13 @@ public void init() {
   }
   printReport();
 }));
+
+executor = Executors.newFixedThreadPool(threadNo);
+
+progressBar = new ProgressBar(System.out, testNo, successCounter::get);
 
 Review comment:
   We need to ensure the progressBar/executor are shut down at the end.
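A common shape for that cleanup, sketched with plain java.util.concurrent (illustrative only, not the actual BaseFreonGenerator code): shut the pool down in a finally block so it is torn down even if task submission fails part-way through; a progress-bar thread would be stopped in the same place.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ShutdownSketch {

  // Runs taskCount trivial tasks and guarantees the pool is torn down.
  static long runTasks(int taskCount) throws InterruptedException {
    AtomicLong successCounter = new AtomicLong();
    ExecutorService executor = Executors.newFixedThreadPool(4);
    try {
      for (int i = 0; i < taskCount; i++) {
        executor.submit(() -> {
          successCounter.incrementAndGet();
        });
      }
    } finally {
      // Stop accepting new tasks, then wait for in-flight ones; a
      // progress-bar thread would be stopped in this finally block too.
      executor.shutdown();
      if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
        executor.shutdownNow();
      }
    }
    return successCounter.get();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(runTasks(100)); // prints 100
  }
}
```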







[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve 
executor memory usage in new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284#discussion_r353941495
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -212,6 +249,13 @@ public void init() {
   }
   printReport();
 }));
+
+executor = Executors.newFixedThreadPool(threadNo);
+
+progressBar = new ProgressBar(System.out, testNo, successCounter::get);
 
 Review comment:
   We need to ensure the progressBar thread is shut down at the end.





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #293: HDDS-2530. Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors

2019-12-04 Thread GitBox
adoroszlai commented on a change in pull request #293: HDDS-2530. Sonar : 
refactor verifyResourceName in HddsClientUtils to fix Sonar errors
URL: https://github.com/apache/hadoop-ozone/pull/293#discussion_r353938171
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 ##
 @@ -150,6 +142,49 @@ public static void verifyResourceName(String resName)
   throw new IllegalArgumentException("Bucket or Volume name "
   + "cannot end with a period or dash");
 }
+  }
+
+  private static boolean isLowercaseAlphanumeric(char c) {
+return (Character.toString(c).matches("[a-z0-9]"));
 
 Review comment:
   I think this can be improved:
   
   1. `doNameChecks` can verify that `resName` `matches()` the pattern 
`[a-z0-9.-]+`, instead of checking each character separately in 
`doCharacterChecks`
   2. `Pattern` can be pre-compiled as a constant
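Applied to the snippet above, the suggestion might look roughly like this (hypothetical class name, not the actual `HddsClientUtils` code):

```java
import java.util.regex.Pattern;

public class ResourceNameCheck {
  // Pre-compiled once as a constant, instead of compiling a
  // fresh regex for every character of the name.
  private static final Pattern VALID_CHARS = Pattern.compile("[a-z0-9.-]+");

  // True if the whole name consists only of allowed characters.
  public static boolean hasValidCharacters(String resName) {
    return VALID_CHARS.matcher(resName).matches();
  }
}
```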





[GitHub] [hadoop-ozone] anuengineer merged pull request #286: HDDS-2475. Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-04 Thread GitBox
anuengineer merged pull request #286: HDDS-2475. Unregister 
ContainerMetadataScrubberMetrics on thread exit
URL: https://github.com/apache/hadoop-ozone/pull/286
 
 
   





[GitHub] [hadoop-ozone] anuengineer commented on issue #286: HDDS-2475. Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-04 Thread GitBox
anuengineer commented on issue #286: HDDS-2475. Unregister 
ContainerMetadataScrubberMetrics on thread exit
URL: https://github.com/apache/hadoop-ozone/pull/286#issuecomment-561804630
 
 
   +1. @xiaoyuyao Thanks for the review. @adoroszlai Thanks for the 
contribution. I will file a follow-up JIRA for the other issues mentioned by 
Xiaoyu. 





[GitHub] [hadoop-ozone] anuengineer commented on issue #288: HDDS-2506. Remove keyAllocationInfo and replication info from the auditLog

2019-12-04 Thread GitBox
anuengineer commented on issue #288: HDDS-2506. Remove keyAllocationInfo and 
replication info from the auditLog
URL: https://github.com/apache/hadoop-ozone/pull/288#issuecomment-561803904
 
 
   +1. I am good to commit this change. I will leave it open till the end of 
the day to see if anyone would like to comment on it. Thanks





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve executor memory usage in new Freon tests

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #284: HDDS-2653. Improve 
executor memory usage in new Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/284#discussion_r353939273
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -101,71 +101,102 @@
   private MetricRegistry metrics = new MetricRegistry();
 
   private AtomicLong successCounter;
-
   private AtomicLong failureCounter;
+  private AtomicLong attemptCounter;
 
   private long startTime;
 
   private PathSchema pathSchema;
+  private String spanName;
+  private ExecutorService executor;
+  private ProgressBar progressBar;
 
   /**
* The main logic to execute a test generator.
*
* @param provider creates the new steps to execute.
*/
   public void runTests(TaskProvider provider) {
+setup(provider);
+startTaskRunners(provider);
+waitForCompletion();
+shutdown();
+reportAnyFailure();
+  }
 
-ExecutorService executor = Executors.newFixedThreadPool(threadNo);
-
-ProgressBar progressBar =
-new ProgressBar(System.out, testNo, successCounter::get);
-progressBar.start();
-
-startTime = System.currentTimeMillis();
-//schedule the execution of all the tasks.
-
-for (long i = 0; i < testNo; i++) {
-
-  final long counter = i;
+  /**
+   * Performs {@code provider}-specific initialization.
+   */
+  private void setup(TaskProvider provider) {
+//provider is usually a lambda, print out only the owner class name:
+spanName = provider.getClass().getSimpleName().split("\\$")[0];
+  }
 
-  //provider is usually a lambda, print out only the owner class name:
-  String spanName = provider.getClass().getSimpleName().split("\\$")[0];
+  /**
+   * Launches {@code threadNo} task runners in executor.  Each one executes 
test
+   * tasks in a loop until completion or failure.
+   */
+  private void startTaskRunners(TaskProvider provider) {
+for (int i = 0; i < threadNo; i++) {
 
 Review comment:
   NIT: `threadNo` is confusing here, since we are submitting tasks with no 
guaranteed number of threads?





[GitHub] [hadoop-ozone] anuengineer commented on issue #288: HDDS-2506. Remove keyAllocationInfo and replication info from the auditLog

2019-12-04 Thread GitBox
anuengineer commented on issue #288: HDDS-2506. Remove keyAllocationInfo and 
replication info from the auditLog
URL: https://github.com/apache/hadoop-ozone/pull/288#issuecomment-561802917
 
 
   > +1 to add replication information
   
   Why? It is an audit log. Why do we need replica info? An audit log is used 
to find out who made access and when. There is a very delicate trade-off 
between making the audit log larger and finding critical information when we 
need it. Typically, an HDFS cluster produces GBs of audit logs per day, and 
processing all that information is a lot of work. Replication is, in my mind, 
a debug tool; plus, you can always get that info from the OM.
   
   We have a set of queries that we support in the audit log tool. If we are 
able to write at least one query, then I am +1 for adding this to the audit 
log. In fact, that should be the test for adding any field to the audit log: 
if we cannot consume the field, what is the point of having it in the log?
   
   Even a far-fetched hypothetical scenario is ok, but we must have some use 
from the auditing and security point of view to add something to the audit log.
   
   





[jira] [Updated] (HDDS-2656) Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2656:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the master 
branch.

> Prefer execute() over submit() if the returned Future is ignored
> 
>
> Key: HDDS-2656
> URL: https://issues.apache.org/jira/browse/HDDS-2656
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{Future<?> submit(Runnable)}} and {{void execute(Runnable)}} in 
> {{ExecutorService}} have the same result.  If the returned {{Future}} is 
> ignored, {{execute}} can be used instead of {{submit}} to avoid creating some 
> objects.
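The difference can be illustrated with a small sketch (illustrative code, not from Ozone): when the returned Future is ignored, `execute` does the same work as `submit` without allocating a `FutureTask` wrapper per task.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecuteVsSubmit {
  // Runs n tasks with execute(); since no result is needed, this avoids
  // wrapping each Runnable in a FutureTask, as submit() would.
  public static int runTasks(int n) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    AtomicInteger done = new AtomicInteger();
    for (int i = 0; i < n; i++) {
      pool.execute(done::incrementAndGet);
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    return done.get();
  }
}
```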



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #283: HDDS-2651 Make startId parameter non-mandatory while listing containe…

2019-12-04 Thread GitBox
xiaoyuyao commented on a change in pull request #283: HDDS-2651 Make startId 
parameter non-mandatory while listing containe…
URL: https://github.com/apache/hadoop-ozone/pull/283#discussion_r353936008
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 ##
 @@ -217,7 +217,7 @@ public ContainerInfo getContainer(final ContainerID 
containerID)
   Collections.sort(containersIds);
 
   return containersIds.stream()
-  .filter(id -> id.getId() > startId)
+  .filter(id -> id.getId() >= startId)
 
 Review comment:
   Agree, +1 to change the startId to 0 for the scmcli list container command. 
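The agreed behavior, an inclusive `startId` so that a default of 0 lists all containers, can be sketched roughly as follows (simplified, hypothetical types rather than the actual `SCMContainerManager` code):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ContainerListing {
  // Inclusive lower bound: with startId = 0, every container id is eligible,
  // so the scmcli list command needs no mandatory start parameter.
  public static List<Long> listFrom(List<Long> sortedIds, long startId,
      int count) {
    return sortedIds.stream()
        .filter(id -> id >= startId)
        .limit(count)
        .collect(Collectors.toList());
  }
}
```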





[GitHub] [hadoop-ozone] anuengineer merged pull request #302: HDDS-2656. Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread GitBox
anuengineer merged pull request #302: HDDS-2656. Prefer execute() over submit() 
if the returned Future is ignored
URL: https://github.com/apache/hadoop-ozone/pull/302
 
 
   





[jira] [Resolved] (HDDS-2571) Handle InterruptedException in SCMPipelineManager

2019-12-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2571.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks for the contribution. I have committed this to the master.

> Handle InterruptedException in SCMPipelineManager
> -
>
> Key: HDDS-2571
> URL: https://issues.apache.org/jira/browse/HDDS-2571
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]
>  






[GitHub] [hadoop-ozone] anuengineer commented on issue #295: HDDS-2571. Handle InterruptedException in SCMPipelineManager

2019-12-04 Thread GitBox
anuengineer commented on issue #295: HDDS-2571. Handle InterruptedException in 
SCMPipelineManager
URL: https://github.com/apache/hadoop-ozone/pull/295#issuecomment-561799441
 
 
   @cxorm  Thanks for the review. I have committed this patch to the master 
branch.





[GitHub] [hadoop-ozone] anuengineer merged pull request #295: HDDS-2571. Handle InterruptedException in SCMPipelineManager

2019-12-04 Thread GitBox
anuengineer merged pull request #295: HDDS-2571. Handle InterruptedException in 
SCMPipelineManager
URL: https://github.com/apache/hadoop-ozone/pull/295
 
 
   





[GitHub] [hadoop-ozone] anuengineer commented on issue #295: HDDS-2571. Handle InterruptedException in SCMPipelineManager

2019-12-04 Thread GitBox
anuengineer commented on issue #295: HDDS-2571. Handle InterruptedException in 
SCMPipelineManager
URL: https://github.com/apache/hadoop-ozone/pull/295#issuecomment-561799047
 
 
   The acceptance test failure does not seem to be related to this patch. I am 
going to commit this.
   





[jira] [Resolved] (HDDS-1722) Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on startup

2019-12-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1722.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thanks for the contribution. I have committed this patch to the master branch.

> Use the bindings in ReconSchemaGenerationModule to create Recon SQL tables on 
> startup
> -
>
> Key: HDDS-1722
> URL: https://issues.apache.org/jira/browse/HDDS-1722
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the table creation is done for each schema definition one by one. 
> Setup sqlite DB and create Recon SQL tables.
> cc [~vivekratnavel], [~swagle]






[GitHub] [hadoop-ozone] anuengineer merged pull request #294: HDDS-1722. Use the bindings in ReconSchemaGenerationModule to create …

2019-12-04 Thread GitBox
anuengineer merged pull request #294: HDDS-1722. Use the bindings in 
ReconSchemaGenerationModule to create …
URL: https://github.com/apache/hadoop-ozone/pull/294
 
 
   





[jira] [Updated] (HDDS-2637) Handle LeaderNotReady exception in OzoneManager StateMachine and upgrade ratis to latest version.

2019-12-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2637:
-
Fix Version/s: 0.5.0

> Handle LeaderNotReady exception in OzoneManager StateMachine and upgrade 
> ratis to latest version.
> --
>
> Key: HDDS-2637
> URL: https://issues.apache.org/jira/browse/HDDS-2637
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to handle LeaderNotReadyException in OM and also update to the 
> latest Ratis version.






[jira] [Updated] (HDDS-2620) Fix listMultipartupload API

2019-12-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2620:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix listMultipartupload API
> ---
>
> Key: HDDS-2620
> URL: https://issues.apache.org/jira/browse/HDDS-2620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix the listMultiparts API in the HA code path.
> In HA, we have an in-memory cache: we put the result in the cache and return 
> the response; later it is picked up by the double-buffer thread and flushed 
> to disk. So, when we do listParts of an MPU key, it should use both the 
> in-memory cache and the RocksDB MPU table to list the parts of the key.
>  
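The described merge of the in-memory cache with the RocksDB MPU table might be sketched as follows (hypothetical names and simplified types, not the actual OM code):

```java
import java.util.Map;
import java.util.TreeMap;

public class MpuPartListing {
  // Parts already flushed to the DB table, overlaid with parts still in the
  // in-memory cache; for a part number present in both, the cache entry wins
  // because it is the more recent state.
  public static Map<Integer, String> listParts(Map<Integer, String> dbTable,
      Map<Integer, String> cache) {
    Map<Integer, String> merged = new TreeMap<>(dbTable);
    merged.putAll(cache);
    return merged;
  }
}
```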






[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #263: HDDS-2620. Fix listMultipartupload API.

2019-12-04 Thread GitBox
bharatviswa504 commented on issue #263: HDDS-2620. Fix listMultipartupload API.
URL: https://github.com/apache/hadoop-ozone/pull/263#issuecomment-561791216
 
 
   Thank You @elek and @smengcl for the review. 





[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #263: HDDS-2620. Fix listMultipartupload API.

2019-12-04 Thread GitBox
bharatviswa504 merged pull request #263: HDDS-2620. Fix listMultipartupload API.
URL: https://github.com/apache/hadoop-ozone/pull/263
 
 
   





[GitHub] [hadoop-ozone] smengcl commented on issue #263: HDDS-2620. Fix listMultipartupload API.

2019-12-04 Thread GitBox
smengcl commented on issue #263: HDDS-2620. Fix listMultipartupload API.
URL: https://github.com/apache/hadoop-ozone/pull/263#issuecomment-561790466
 
 
   lgtm +1





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353919018
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnFour.getUuid())
+.setNodes(Arrays.asList(dnFour, dnFive, dnSix))
+.build();
+
+final ContainerBlockID blockID = new ContainerBlockID(1L, 1L);
+
+final List<AllocatedBlock> blocks = new ArrayList<>();
+final AllocatedBlock block = new AllocatedBlock.Builder()
+.setPipeline(pipelineOne)
+.setContainerBlockID(blockID)
+.build();
+blocks.add(block);
+
+Mockito.when(blockClient.allocateBlock(Mockito.anyLong(), Mockito.anyInt(),
+Mockito.any(ReplicationType.class),
+Mockito.any(HddsProtos.ReplicationFactor.class),
+Mockito.anyString(), Mockito.any(ExcludeList.class)))
+.thenReturn(blocks);
+
+Mockito.when(containerClient.getContainerWithPipeline(1L))
+.thenReturn(new ContainerWithPipeline(null, pipelineTwo));
+
+final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
+.setVolume("volumeOne")
+.setAdminName("admin")
+.setOwnerName("admin")
+.build();
+TestOMRequestUtils.addVolumeToOM(metadataManager, volumeArgs);
+
+final OmBucketInfo bucketInfo = OmBucketInfo.newBuilder()
+  .setVolumeName("volumeOne")
+  .setBucketName("bucketOne")
+  .build();
+TestOMRequestUtils.addBucketToOM(metadataManager, bucketInfo);
+
+final OmKeyArgs.Builder keyArgs = new OmKeyArgs.Builder()
+.setVolumeName("volumeOne")
+.setBucketName("bucketOne")
+.setKeyName("keyOne")
+.setFactor(ReplicationFactor.THREE)
+.setAcls(Collections.emptyList());
+
+final OpenKeySession session = manager.createFile(keyArgs.build(),
+false, false);
 
 Review comment:
   > make sure that we are not breaking any functionality in the old code path
   
   For the old write-request code path, which is no longer used, I think it is 
okay not to use these APIs, and instead create a key in the DB by adding a new 
API in TestOMRequestUtils. As this is a newly added test, it is okay to do it 
this way.






[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #205: HDDS-2386. Implement incremental ChunkBuffer.

2019-12-04 Thread GitBox
xiaoyuyao merged pull request #205: HDDS-2386. Implement incremental 
ChunkBuffer.
URL: https://github.com/apache/hadoop-ozone/pull/205
 
 
   





[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #205: HDDS-2386. Implement incremental ChunkBuffer.

2019-12-04 Thread GitBox
xiaoyuyao commented on issue #205: HDDS-2386. Implement incremental ChunkBuffer.
URL: https://github.com/apache/hadoop-ozone/pull/205#issuecomment-561784881
 
 
   Thanks @szetszwo  for the update. +1, I will merge it shortly. 





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353918059
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnFour.getUuid())
+.setNodes(Arrays.asList(dnFour, dnFive, dnSix))
+.build();
+
+final ContainerBlockID blockID = new ContainerBlockID(1L, 1L);
+
+final List<AllocatedBlock> blocks = new ArrayList<>();
+final AllocatedBlock block = new AllocatedBlock.Builder()
+.setPipeline(pipelineOne)
+.setContainerBlockID(blockID)
+.build();
+blocks.add(block);
+
+Mockito.when(blockClient.allocateBlock(Mockito.anyLong(), Mockito.anyInt(),
+Mockito.any(ReplicationType.class),
+Mockito.any(HddsProtos.ReplicationFactor.class),
+Mockito.anyString(), Mockito.any(ExcludeList.class)))
+.thenReturn(blocks);
+
+Mockito.when(containerClient.getContainerWithPipeline(1L))
+.thenReturn(new ContainerWithPipeline(null, pipelineTwo));
+
+final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
+.setVolume("volumeOne")
+.setAdminName("admin")
+.setOwnerName("admin")
+.build();
+TestOMRequestUtils.addVolumeToOM(metadataManager, volumeArgs);
+
+final OmBucketInfo bucketInfo = OmBucketInfo.newBuilder()
+  .setVolumeName("volumeOne")
+  .setBucketName("bucketOne")
+  .build();
+TestOMRequestUtils.addBucketToOM(metadataManager, bucketInfo);
+
+final OmKeyArgs.Builder keyArgs = new OmKeyArgs.Builder()
+.setVolumeName("volumeOne")
+.setBucketName("bucketOne")
+.setKeyName("keyOne")
+.setFactor(ReplicationFactor.THREE)
+.setAcls(Collections.emptyList());
+
+final OpenKeySession session = manager.createFile(keyArgs.build(),
+false, false);
 
 Review comment:
   > even the read path in old code will also be removed
   
We have not done any work for that; read requests are still using the old code 
path. I am not sure there is any plan to rewrite the read requests, as they can 
be used as-is for OM HA/non-HA.





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
nandakumar131 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353917339
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
 
 Review comment:
   Thanks for the catch.





[GitHub] [hadoop-ozone] xiaoyuyao commented on issue #117: HDDS-1576. Support configure more than one raft log storage to host m…

2019-12-04 Thread GitBox
xiaoyuyao commented on issue #117: HDDS-1576. Support configure more than one 
raft log storage to host m…
URL: https://github.com/apache/hadoop-ozone/pull/117#issuecomment-561783809
 
 
   Can you rebase the PR, @ChenSammi ?





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353914642
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnFour.getUuid())
+.setNodes(Arrays.asList(dnFour, dnFive, dnSix))
+.build();
+
+final ContainerBlockID blockID = new ContainerBlockID(1L, 1L);
+
+final List<AllocatedBlock> blocks = new ArrayList<>();
+final AllocatedBlock block = new AllocatedBlock.Builder()
+.setPipeline(pipelineOne)
+.setContainerBlockID(blockID)
+.build();
+blocks.add(block);
+
+Mockito.when(blockClient.allocateBlock(Mockito.anyLong(), Mockito.anyInt(),
+Mockito.any(ReplicationType.class),
+Mockito.any(HddsProtos.ReplicationFactor.class),
+Mockito.anyString(), Mockito.any(ExcludeList.class)))
+.thenReturn(blocks);
+
+Mockito.when(containerClient.getContainerWithPipeline(1L))
+.thenReturn(new ContainerWithPipeline(null, pipelineTwo));
+
+final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
+.setVolume("volumeOne")
+.setAdminName("admin")
+.setOwnerName("admin")
+.build();
+TestOMRequestUtils.addVolumeToOM(metadataManager, volumeArgs);
+
+final OmBucketInfo bucketInfo = OmBucketInfo.newBuilder()
+  .setVolumeName("volumeOne")
+  .setBucketName("bucketOne")
+  .build();
+TestOMRequestUtils.addBucketToOM(metadataManager, bucketInfo);
+
+final OmKeyArgs.Builder keyArgs = new OmKeyArgs.Builder()
+.setVolumeName("volumeOne")
+.setBucketName("bucketOne")
+.setKeyName("keyOne")
+.setFactor(ReplicationFactor.THREE)
+.setAcls(Collections.emptyList());
+
+final OpenKeySession session = manager.createFile(keyArgs.build(),
+false, false);
 
 Review comment:
I think it will be better to add a similar API to create a key in 
TestOMRequestUtils, which will be helpful in future tests. (Right now there is 
no API that adds a key to the DB and takes OmKeyInfo as an argument.)





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
nandakumar131 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353914413
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnFour.getUuid())
+.setNodes(Arrays.asList(dnFour, dnFive, dnSix))
+.build();
+
+final ContainerBlockID blockID = new ContainerBlockID(1L, 1L);
+
+final List<AllocatedBlock> blocks = new ArrayList<>();
+final AllocatedBlock block = new AllocatedBlock.Builder()
+.setPipeline(pipelineOne)
+.setContainerBlockID(blockID)
+.build();
+blocks.add(block);
+
+Mockito.when(blockClient.allocateBlock(Mockito.anyLong(), Mockito.anyInt(),
+Mockito.any(ReplicationType.class),
+Mockito.any(HddsProtos.ReplicationFactor.class),
+Mockito.anyString(), Mockito.any(ExcludeList.class)))
+.thenReturn(blocks);
+
+Mockito.when(containerClient.getContainerWithPipeline(1L))
+.thenReturn(new ContainerWithPipeline(null, pipelineTwo));
+
+final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
+.setVolume("volumeOne")
+.setAdminName("admin")
+.setOwnerName("admin")
+.build();
+TestOMRequestUtils.addVolumeToOM(metadataManager, volumeArgs);
+
+final OmBucketInfo bucketInfo = OmBucketInfo.newBuilder()
+  .setVolumeName("volumeOne")
+  .setBucketName("bucketOne")
+  .build();
+TestOMRequestUtils.addBucketToOM(metadataManager, bucketInfo);
+
+final OmKeyArgs.Builder keyArgs = new OmKeyArgs.Builder()
+.setVolumeName("volumeOne")
+.setBucketName("bucketOne")
+.setKeyName("keyOne")
+.setFactor(ReplicationFactor.THREE)
+.setAcls(Collections.emptyList());
+
+final OpenKeySession session = manager.createFile(keyArgs.build(),
+false, false);
 
 Review comment:
Agreed. Even the read path in the old code will be removed; this change is just 
to make sure that we are not breaking any functionality in the old code path. 
Both KeyManagerImpl and the corresponding test class will be removed 
eventually.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
bharatviswa504 commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353910353
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnFour.getUuid())
+.setNodes(Arrays.asList(dnFour, dnFive, dnSix))
+.build();
+
+final ContainerBlockID blockID = new ContainerBlockID(1L, 1L);
+
+final List<AllocatedBlock> blocks = new ArrayList<>();
+final AllocatedBlock block = new AllocatedBlock.Builder()
+.setPipeline(pipelineOne)
+.setContainerBlockID(blockID)
+.build();
+blocks.add(block);
+
+Mockito.when(blockClient.allocateBlock(Mockito.anyLong(), Mockito.anyInt(),
+Mockito.any(ReplicationType.class),
+Mockito.any(HddsProtos.ReplicationFactor.class),
+Mockito.anyString(), Mockito.any(ExcludeList.class)))
+.thenReturn(blocks);
+
+Mockito.when(containerClient.getContainerWithPipeline(1L))
+.thenReturn(new ContainerWithPipeline(null, pipelineTwo));
+
+final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
+.setVolume("volumeOne")
+.setAdminName("admin")
+.setOwnerName("admin")
+.build();
+TestOMRequestUtils.addVolumeToOM(metadataManager, volumeArgs);
+
+final OmBucketInfo bucketInfo = OmBucketInfo.newBuilder()
+  .setVolumeName("volumeOne")
+  .setBucketName("bucketOne")
+  .build();
+TestOMRequestUtils.addBucketToOM(metadataManager, bucketInfo);
+
+final OmKeyArgs.Builder keyArgs = new OmKeyArgs.Builder()
+.setVolumeName("volumeOne")
+.setBucketName("bucketOne")
+.setKeyName("keyOne")
+.setFactor(ReplicationFactor.THREE)
+.setAcls(Collections.emptyList());
+
+final OpenKeySession session = manager.createFile(keyArgs.build(),
+false, false);
 
 Review comment:
Write requests in the old code will soon be removed. Can we use APIs similar to 
those used to create the bucket/volume, or directly use the OmMetadataManager 
keyTable put to add the key?





[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #301: HDDS-2636. Refresh pipeline information in OzoneManager lookupFile call.

2019-12-04 Thread GitBox
avijayanhwx commented on a change in pull request #301: HDDS-2636. Refresh 
pipeline information in OzoneManager lookupFile call.
URL: https://github.com/apache/hadoop-ozone/pull/301#discussion_r353904813
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -177,4 +194,128 @@ private OmMultipartInfo 
initMultipartUpload(KeyManagerImpl omtest,
 .build();
 return omtest.initiateMultipartUpload(key1);
   }
+
+  @Test
+  public void testLookupFileWithDnFailure() throws IOException {
+final ScmBlockLocationProtocol blockClient =
+Mockito.mock(ScmBlockLocationProtocol.class);
+final StorageContainerLocationProtocol containerClient =
+Mockito.mock(StorageContainerLocationProtocol.class);
+final KeyManager manager = new KeyManagerImpl(null,
+new ScmClient(blockClient, containerClient),
+metadataManager, configuration, "test-om",
+Mockito.mock(OzoneBlockTokenSecretManager.class), null, null);
+
+final DatanodeDetails dnOne = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnTwo = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnThree = TestUtils.randomDatanodeDetails();
+
+final DatanodeDetails dnFour = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnFive = TestUtils.randomDatanodeDetails();
+final DatanodeDetails dnSix = TestUtils.randomDatanodeDetails();
+
+final PipelineID pipelineOneId = PipelineID.randomId();
+final Pipeline pipelineOne = Pipeline.newBuilder()
+.setId(pipelineOneId)
+.setType(ReplicationType.RATIS)
+.setFactor(ReplicationFactor.THREE)
+.setState(Pipeline.PipelineState.OPEN)
+.setLeaderId(dnOne.getUuid())
+.setNodes(Arrays.asList(dnOne, dnTwo, dnThree))
+.build();
+
+final PipelineID pipelineTwoId = PipelineID.randomId();
+final Pipeline pipelineTwo = Pipeline.newBuilder()
+.setId(pipelineOneId)
 
 Review comment:
   Should this be pipelineTwoId?





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #300: HDDS-2662. Update gRPC and datanode protobuf version in Ozone.

2019-12-04 Thread GitBox
anuengineer commented on a change in pull request #300: HDDS-2662. Update gRPC 
and datanode protobuf version in Ozone.
URL: https://github.com/apache/hadoop-ozone/pull/300#discussion_r353903316
 
 

 ##
 File path: pom.xml
 ##
 @@ -174,8 +174,8 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 
 
 0.5.1
-3.5.0
-1.10.0
+3.10.0
 
 Review comment:
Is this the version that is used in Hadoop trunk?





[jira] [Commented] (HDDS-2443) Python client/interface for Ozone

2019-12-04 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988050#comment-16988050
 ] 

Marton Elek commented on HDDS-2443:
---

> Would the repo be branch of apache/hadoop-ozone if we move it to the apache 
> repository?

 

Yes, I think that would be the easiest way to do it. If we would like to merge 
it at any point in the future, it should evolve under Apache.

 

 

> Python client/interface for Ozone
> -
>
> Key: HDDS-2443
> URL: https://issues.apache.org/jira/browse/HDDS-2443
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Client
>Reporter: Li Cheng
>Priority: Major
> Attachments: OzoneS3.py
>
>
> Original ideas: item#25 in 
> [https://cwiki.apache.org/confluence/display/HADOOP/Ozone+project+ideas+for+new+contributors]
> Ozone Client(Python) for Data Science Notebook such as Jupyter.
>  # Size: Large
>  # PyArrow: [https://pypi.org/project/pyarrow/]
> Python -> libhdfs HDFS JNI library (HDFS, S3, ...) -> Java client API; 
> Impala uses libhdfs
>  
> Path to try:
> # s3 interface: Ozone s3 gateway(already supported) + AWS python client 
> (boto3)
> # python native RPC
> # pyarrow + libhdfs, which use the Java client under the hood.
> # python + C interface of go / rust ozone library. I created POC go / rust 
> clients earlier which can be improved if the libhdfs interface is not good 
> enough. [By [~elek]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2656) Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2656:
---
Status: Patch Available  (was: In Progress)

> Prefer execute() over submit() if the returned Future is ignored
> 
>
> Key: HDDS-2656
> URL: https://issues.apache.org/jira/browse/HDDS-2656
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{Future<?> submit(Runnable)}} and {{void execute(Runnable)}} in 
> {{ExecutorService}} have the same result.  If the returned {{Future}} is 
> ignored, {{execute}} can be used instead of {{submit}} to avoid creating some 
> objects.






[jira] [Updated] (HDDS-2656) Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2656:
-
Labels: pull-request-available  (was: )

> Prefer execute() over submit() if the returned Future is ignored
> 
>
> Key: HDDS-2656
> URL: https://issues.apache.org/jira/browse/HDDS-2656
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>
> {{Future<?> submit(Runnable)}} and {{void execute(Runnable)}} in 
> {{ExecutorService}} have the same result.  If the returned {{Future}} is 
> ignored, {{execute}} can be used instead of {{submit}} to avoid creating some 
> objects.






[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #302: HDDS-2656. Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread GitBox
adoroszlai opened a new pull request #302: HDDS-2656. Prefer execute() over 
submit() if the returned Future is ignored
URL: https://github.com/apache/hadoop-ozone/pull/302
 
 
   ## What changes were proposed in this pull request?
   
`Future<?> submit(Runnable)` and `void execute(Runnable)` in `ExecutorService` 
have the same result.  If the returned `Future` is ignored, `execute` can be 
used instead of `submit` to avoid creating some objects.
   
   https://issues.apache.org/jira/browse/HDDS-2656
   
   ## How was this patch tested?
   
   https://github.com/adoroszlai/hadoop-ozone/runs/333081135
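The difference described above can be shown with a minimal, self-contained sketch (the class name and counter are illustrative, not taken from the patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecuteVsSubmit {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicInteger ran = new AtomicInteger();

        // submit() wraps the Runnable in a FutureTask and returns a Future;
        // if the caller never reads it, that wrapper is allocated for nothing.
        Future<?> ignored = pool.submit(ran::incrementAndGet);

        // execute() runs the same task without allocating a Future.
        pool.execute(ran::incrementAndGet);

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("tasks run: " + ran.get());  // prints "tasks run: 2"
    }
}
```

Both calls run the task the same way; the only observable difference to a caller that discards the result is the extra `FutureTask` allocation, which is why the patch switches to `execute` where the `Future` is ignored.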





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #299: HDDS-2071. Support filters in ozone insight point

2019-12-04 Thread GitBox
adoroszlai commented on a change in pull request #299: HDDS-2071. Support 
filters in ozone insight point
URL: https://github.com/apache/hadoop-ozone/pull/299#discussion_r353874274
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/datanode/RatisInsight.java
 ##
 @@ -39,31 +39,43 @@
  */
 public class RatisInsight extends BaseInsightPoint implements InsightPoint {
 
+  public static final String PIPLINE_FILTER = "pipeline";
   private OzoneConfiguration conf;
 
   public RatisInsight(OzoneConfiguration conf) {
 this.conf = conf;
   }
 
   @Override
-  public List<LoggerSource> getRelatedLoggers(boolean verbose) {
+  public List<LoggerSource> getRelatedLoggers(boolean verbose,
+  Map<String, String> filters) {
+if (filters == null || !filters.containsKey(PIPLINE_FILTER)) {
+  throw new IllegalArgumentException(PIPLINE_FILTER
+  + " filter should be specified (-f pipline=<pipeline>)");
+}
+List<LoggerSource> result = new ArrayList<>();
-try {
-  Optional<Pipeline> pipeline;
-  try (ScmClient scmClient = createScmClient(conf)) {
-pipeline = scmClient.listPipelines()
-.stream()
-.filter(d -> d.getNodes().size() > 1)
-.findFirst();
+
+try (ScmClient scmClient = createScmClient(conf)) {
+  Optional<Pipeline> pipelineSelection = scmClient.listPipelines()
+  .stream()
+  .filter(d -> d.getNodes().size() > 1)
 
 Review comment:
   Do we need the size filter, given that pipeline ID is required?





[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #299: HDDS-2071. Support filters in ozone insight point

2019-12-04 Thread GitBox
adoroszlai commented on a change in pull request #299: HDDS-2071. Support 
filters in ozone insight point
URL: https://github.com/apache/hadoop-ozone/pull/299#discussion_r353860671
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/datanode/RatisInsight.java
 ##
 @@ -39,31 +39,43 @@
  */
 public class RatisInsight extends BaseInsightPoint implements InsightPoint {
 
+  public static final String PIPLINE_FILTER = "pipeline";
   private OzoneConfiguration conf;
 
   public RatisInsight(OzoneConfiguration conf) {
 this.conf = conf;
   }
 
   @Override
-  public List<LoggerSource> getRelatedLoggers(boolean verbose) {
+  public List<LoggerSource> getRelatedLoggers(boolean verbose,
+  Map<String, String> filters) {
+if (filters == null || !filters.containsKey(PIPLINE_FILTER)) {
+  throw new IllegalArgumentException(PIPLINE_FILTER
+  + " filter should be specified (-f pipline=<pipeline>)");

[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #299: HDDS-2071. Support filters in ozone insight point

2019-12-04 Thread GitBox
adoroszlai commented on a change in pull request #299: HDDS-2071. Support 
filters in ozone insight point
URL: https://github.com/apache/hadoop-ozone/pull/299#discussion_r353861506
 
 

 ##
 File path: 
hadoop-ozone/insight/src/main/java/org/apache/hadoop/ozone/insight/datanode/RatisInsight.java
 ##
 @@ -39,31 +39,43 @@
  */
 public class RatisInsight extends BaseInsightPoint implements InsightPoint {
 
+  public static final String PIPLINE_FILTER = "pipeline";
 
 Review comment:
   ```suggestion
 public static final String PIPELINE_FILTER = "pipeline";
   ```




