[jira] [Assigned] (HDDS-2608) Provide command to wait until SCM is out from the safe-mode

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2608:
--

Assignee: Marton Elek

> Provide command to wait until SCM is out from the safe-mode
> ---
>
> Key: HDDS-2608
> URL: https://issues.apache.org/jira/browse/HDDS-2608
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The safe mode can be checked with "ozone scmcli safemode status". But for 
> acceptance tests there is no easy way to check whether the cluster is ready to 
> execute the tests (see HDDS-2606 for an example).
> One easy solution is to create a polling version of "safemode status":
> "safemode wait --timeout ..." would block until the SCM is out of safe mode.
> With proper safe mode rules (min datanodes + min pipeline numbers) it can help 
> us check whether the cluster is ready for the acceptance tests.
> The same command can be used in k8s as well to test whether the cluster is 
> ready to start the freon commands...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2609) Move plain unit tests out of integration tests

2019-11-21 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2609:
--

 Summary: Move plain unit tests out of integration tests
 Key: HDDS-2609
 URL: https://issues.apache.org/jira/browse/HDDS-2609
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Some test classes in {{integration-test}} are actually unit tests: they do not 
start a mini cluster, nor even multiple components.  These can be moved to the 
subprojects they belong to (e.g. {{container-service}}).  The benefit is that 
breakage will be easier to spot, since integration tests are executed less 
frequently than unit tests.






[jira] [Updated] (HDDS-2605) Use LongSupplier to avoid boxing

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2605:
---
Status: Patch Available  (was: In Progress)

> Use LongSupplier to avoid boxing
> 
>
> Key: HDDS-2605
> URL: https://issues.apache.org/jira/browse/HDDS-2605
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Freon's {{ProgressBar}} uses {{Supplier}}, which could be replaced with 
> {{LongSupplier}} to avoid boxing.
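The difference can be illustrated with a minimal sketch (the names below are generic, not Freon's actual fields):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;
import java.util.function.Supplier;

public class SupplierBoxing {
  static final AtomicLong PROGRESS = new AtomicLong();

  // Supplier<Long>: every get() boxes the primitive long into a Long object.
  static Supplier<Long> boxedSupplier() {
    return PROGRESS::get;
  }

  // LongSupplier: getAsLong() returns the primitive directly, no allocation.
  static LongSupplier primitiveSupplier() {
    return PROGRESS::get;
  }

  public static void main(String[] args) {
    PROGRESS.set(1000);
    long a = boxedSupplier().get();           // boxes, then unboxes
    long b = primitiveSupplier().getAsLong(); // primitive all the way
    assert a == b;
  }
}
```

For a progress bar polled many times per second, avoiding a Long allocation per call is a small but free win.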






[jira] [Updated] (HDDS-2603) Avoid unnecessary boxing in XceiverClientReply

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2603:
---
Status: Patch Available  (was: In Progress)

> Avoid unnecessary boxing in XceiverClientReply
> --
>
> Key: HDDS-2603
> URL: https://issues.apache.org/jira/browse/HDDS-2603
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{XceiverClientReply#logIndex}} is unnecessarily boxed/unboxed.






[jira] [Created] (HDDS-2660) Create insight point for datanode container protocol

2019-12-04 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2660:
--

 Summary: Create insight point for datanode container protocol
 Key: HDDS-2660
 URL: https://issues.apache.org/jira/browse/HDDS-2660
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The goal of this task is to create a new insight point for the datanode 
container protocol ({{HddsDispatcher}}) to be able to debug 
{{client<->datanode}} communication.






[jira] [Updated] (HDDS-2506) Remove keyAllocationInfo and replication info from the auditLog

2019-12-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2506:
---
Status: Patch Available  (was: In Progress)

> Remove keyAllocationInfo and replication info from the auditLog
> ---
>
> Key: HDDS-2506
> URL: https://issues.apache.org/jira/browse/HDDS-2506
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During the review of HDDS-2470 I found that the full keyLocationInfo is added 
> to the audit log for s3 operations:
>  
> {code:java}
> 2019-11-15 12:34:18,538 | INFO  | OMAudit | user=hadoop | ip=192.168.16.2 | 
> op=ALLOCATE_KEY {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, 
> key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[]} | ret=SUCCESS |  2019-11-15 12:34:20,576 | INFO  | 
> OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_KEY 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[]} | ret=SUCCESS |  2019-11-15 12:34:20,626 | INFO  | 
> OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_BLOCK 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=THREE, 
> keyLocationInfo=[], clientID=103141950132977668} | ret=SUCCESS |  2019-11-15 
> 12:34:51,705 | INFO  | OMAudit | user=hadoop | ip=192.168.16.2 | 
> op=COMMIT_MULTIPART_UPLOAD_PARTKEY 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[blockID {  containerBlockID {containerID: 1localID: 
> 103141950135009280  }  blockCommitSequenceId: 2}offset: 0length: 
> 3813createVersion: 0pipeline {  members {uuid: 
> "eefe54e8-5723-458e-9204-207c6b97c9b3"ipAddress: "192.168.16.3"
> hostName: "ozones3_datanode_1.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859} 
>networkName: "eefe54e8-5723-458e-9204-207c6b97c9b3"networkLocation: 
> "/default-rack"  }  members {uuid: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743" 
>ipAddress: "192.168.16.7"hostName: 
> "ozones3_datanode_2.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859}
> networkName: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743"networkLocation: 
> "/default-rack"  }  members {uuid: "9979c326-4982-4a4c-b34e-e70c1d825f5f" 
>ipAddress: "192.168.16.6"hostName: 
> "ozones3_datanode_3.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859}
> networkName: "9979c326-4982-4a4c-b34e-e70c1d825f5f"networkLocation: 
> "/default-rack"  }  state: PIPELINE_OPEN  type: RATIS  factor: THREE  id {
> id: "69ba305b-fe89-4f5c-97cd-b894d5ee8f2b"  }  leaderID: ""}], partNumber=1, 
> partName=/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668} 
> | ret=SUCCESS |  2019-11-15 12:42:10,883 | INFO  | OMAudit | user=hadoop | 
> ip=192.168.16.2 | op=COMPLETE_MULTIPART_UPLOAD 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=0, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[], 
> multipartList=[partNumber: 1partName: 
> "/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668"]} | 
> ret=SUCCESS |  
>  {code}
> Including the full keyLocationInfo in the audit log may cause some problems:
>  * It makes the audit log slower
>  * It makes the audit log harder to parse
> I think it's better to separate the debug log (which can be provided easily 
> with the ozone insight tool) from the audit log. Therefore I suggest removing 
> the keyLocationInfo, replicationType and replicationFactor from the audit log.






[jira] [Updated] (HDDS-2658) Insight log level reset does not work

2019-12-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2658:
---
Status: Patch Available  (was: In Progress)

> Insight log level reset does not work
> -
>
> Key: HDDS-2658
> URL: https://issues.apache.org/jira/browse/HDDS-2658
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ozone insight log}} command changes the log level to debug or trace.  
> After streaming is stopped, it attempts to reset the level to info.  This does 
> not seem to work, probably because the process is abruptly stopped (Ctrl-C).






[jira] [Assigned] (HDDS-2506) Remove keyAllocationInfo and replication info from the auditLog

2019-12-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2506:
--

Assignee: Attila Doroszlai

> Remove keyAllocationInfo and replication info from the auditLog
> ---
>
> Key: HDDS-2506
> URL: https://issues.apache.org/jira/browse/HDDS-2506
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Attila Doroszlai
>Priority: Major
>
> During the review of HDDS-2470 I found that the full keyLocationInfo is added 
> to the audit log for s3 operations:
>  
> {code:java}
> 2019-11-15 12:34:18,538 | INFO  | OMAudit | user=hadoop | ip=192.168.16.2 | 
> op=ALLOCATE_KEY {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, 
> key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[]} | ret=SUCCESS |  2019-11-15 12:34:20,576 | INFO  | 
> OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_KEY 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[]} | ret=SUCCESS |  2019-11-15 12:34:20,626 | INFO  | 
> OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_BLOCK 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=THREE, 
> keyLocationInfo=[], clientID=103141950132977668} | ret=SUCCESS |  2019-11-15 
> 12:34:51,705 | INFO  | OMAudit | user=hadoop | ip=192.168.16.2 | 
> op=COMMIT_MULTIPART_UPLOAD_PARTKEY 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=3813, replicationType=RATIS, replicationFactor=ONE, 
> keyLocationInfo=[blockID {  containerBlockID {containerID: 1localID: 
> 103141950135009280  }  blockCommitSequenceId: 2}offset: 0length: 
> 3813createVersion: 0pipeline {  members {uuid: 
> "eefe54e8-5723-458e-9204-207c6b97c9b3"ipAddress: "192.168.16.3"
> hostName: "ozones3_datanode_1.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859} 
>networkName: "eefe54e8-5723-458e-9204-207c6b97c9b3"networkLocation: 
> "/default-rack"  }  members {uuid: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743" 
>ipAddress: "192.168.16.7"hostName: 
> "ozones3_datanode_2.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859}
> networkName: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743"networkLocation: 
> "/default-rack"  }  members {uuid: "9979c326-4982-4a4c-b34e-e70c1d825f5f" 
>ipAddress: "192.168.16.6"hostName: 
> "ozones3_datanode_3.ozones3_default"ports {  name: "RATIS"  
> value: 9858}ports {  name: "STANDALONE"  value: 9859}
> networkName: "9979c326-4982-4a4c-b34e-e70c1d825f5f"networkLocation: 
> "/default-rack"  }  state: PIPELINE_OPEN  type: RATIS  factor: THREE  id {
> id: "69ba305b-fe89-4f5c-97cd-b894d5ee8f2b"  }  leaderID: ""}], partNumber=1, 
> partName=/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668} 
> | ret=SUCCESS |  2019-11-15 12:42:10,883 | INFO  | OMAudit | user=hadoop | 
> ip=192.168.16.2 | op=COMPLETE_MULTIPART_UPLOAD 
> {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, 
> dataSize=0, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[], 
> multipartList=[partNumber: 1partName: 
> "/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668"]} | 
> ret=SUCCESS |  
>  {code}
> Including the full keyLocationInfo in the audit log may cause some problems:
>  * It makes the audit log slower
>  * It makes the audit log harder to parse
> I think it's better to separate the debug log (which can be provided easily 
> with the ozone insight tool) from the audit log. Therefore I suggest removing 
> the keyLocationInfo, replicationType and replicationFactor from the audit log.






[jira] [Created] (HDDS-2658) Insight log level reset does not work

2019-12-03 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2658:
--

 Summary: Insight log level reset does not work
 Key: HDDS-2658
 URL: https://issues.apache.org/jira/browse/HDDS-2658
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Tools
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The {{ozone insight log}} command changes the log level to debug or trace.  
After streaming is stopped, it attempts to reset the level to info.  This does 
not seem to work, probably because the process is abruptly stopped (Ctrl-C).
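One possible fix (an assumption on my part, not the actual patch) is to register the reset as a JVM shutdown hook, so it runs on Ctrl-C as well as on normal return. Here {{setLogLevel}} is just an observable stub standing in for the real log-level call:

```java
public class InsightLogReset {
  // Stub standing in for the real HTTP log-level endpoint call.
  static String currentLevel = "INFO";

  static void setLogLevel(String level) {
    currentLevel = level;
  }

  public static void main(String[] args) {
    // A shutdown hook runs on normal exit AND on SIGINT (Ctrl-C),
    // so the reset is no longer skipped when the user interrupts streaming.
    Thread resetHook = new Thread(() -> setLogLevel("INFO"));
    Runtime.getRuntime().addShutdownHook(resetHook);

    setLogLevel("DEBUG");
    // ... stream log lines here until the user interrupts ...
  }
}
```

Shutdown hooks do not run on SIGKILL, so a belt-and-braces approach could also reset the level at the start of the next invocation.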






[jira] [Updated] (HDDS-2646) Start acceptance tests only if at least one THREE pipeline is available

2019-12-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2646:
---
Labels: pull-request-available  (was: )

> Start acceptance tests only if at least one THREE pipeline is available
> ---
>
> Key: HDDS-2646
> URL: https://issues.apache.org/jira/browse/HDDS-2646
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: docker-ozoneperf-ozoneperf-basic-scm.log
>
>
> After HDDS-2034 (or even before?) pipeline creation (or the status transition 
> from ALLOCATED to OPEN) requires at least one pipeline report from all of the 
> datanodes. This means that the cluster might not be usable even if it's out of 
> safe mode AND there are at least three datanodes.
> This makes all the acceptance tests unstable.
> For example in 
> [this|https://github.com/apache/hadoop-ozone/pull/263/checks?check_run_id=324489319]
>  run.
> {code:java}
> scm_1 | 2019-11-28 11:22:54,401 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode 548f146f-2166-440a-b9f1-83086591ae26
> scm_1 | 2019-11-28 11:22:54,402 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c
> scm_1 | 2019-11-28 11:22:54,404 INFO pipeline.RatisPipelineProvider: 
> Send pipeline:PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb create command 
> to datanode 47dbb8e4-bbde-4164-a798-e47e8c696fb5
> scm_1 | 2019-11-28 11:22:54,405 INFO pipeline.PipelineStateManager: 
> Created pipeline Pipeline[ Id: 8dc4aeb6-5ae2-46a0-948d-287c97dd81fb, Nodes: 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}47dbb8e4-bbde-4164-a798-e47e8c696fb5{ip: 172.24.0.2, host: 
> ozoneperf_datanode_2.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}, Type:RATIS, Factor:THREE, State:ALLOCATED]
> scm_1 | 2019-11-28 11:22:56,975 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:22:58,018 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:01,871 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:02,817 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> 548f146f-2166-440a-b9f1-83086591ae26{ip: 172.24.0.10, host: 
> ozoneperf_datanode_3.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null}
> scm_1 | 2019-11-28 11:23:02,847 INFO pipeline.PipelineReportHandler: 
> Pipeline THREE PipelineID=8dc4aeb6-5ae2-46a0-948d-287c97dd81fb reported by 
> dccee7c4-19b3-41b8-a3f7-b47b0ed45f6c{ip: 172.24.0.5, host: 
> ozoneperf_datanode_1.ozoneperf_default, networkLocation: /default-rack, 
> certSerialId: null} {code}
> As you can see, the pipeline is created but the cluster is not usable, as the 
> pipeline is not yet reported back by datanode_2:
> {code:java}
> scm_1 | 2019-11-28 11:23:13,879 WARN block.BlockManagerImpl: Pipeline 
> creation failed for type:RATIS factor:THREE. Retrying get pipelines call once.
> scm_1 | 
> org.apache.hadoop.hdds.scm.pipeline.InsufficientDatanodesException: Cannot 
> create pipeline of factor 3 using 0 nodes.{code}
>  The quick fix is to configure all the compose clusters to wait until one 
> pipeline is available. This can be done by adjusting the number of the 
> required datanodes:
> {code:java}
> // We only care about THREE replica pipeline
> int minHealthyPipelines = minDatanodes /
> HddsProtos.ReplicationFactor.THREE_VALUE; {code}
>  




[jira] [Updated] (HDDS-2656) Prefer execute() over submit() if the returned Future is ignored

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2656:
---
Status: Patch Available  (was: In Progress)

> Prefer execute() over submit() if the returned Future is ignored
> 
>
> Key: HDDS-2656
> URL: https://issues.apache.org/jira/browse/HDDS-2656
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{Future submit(Runnable)}} and {{void execute(Runnable)}} in 
> {{ExecutorService}} have the same result.  If the returned {{Future}} is 
> ignored, {{execute}} can be used instead of {{submit}} to avoid creating some 
> objects.
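A minimal demonstration of the two calls; both run the task, but {{submit}} allocates a wrapper {{FutureTask}} and returns a {{Future}} that are simply discarded when the caller ignores the result:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecuteVsSubmit {
  static int runBoth() {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    AtomicInteger runs = new AtomicInteger();

    // submit() wraps the task in a FutureTask and returns a Future;
    // both objects are wasted if the caller never looks at the result.
    pool.submit(runs::incrementAndGet);

    // execute() hands the Runnable straight to the worker, no wrapper object.
    pool.execute(runs::incrementAndGet);

    pool.shutdown();
    try {
      pool.awaitTermination(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return runs.get();
  }
}
```

There is also a behavioral difference worth noting: an exception thrown by a submitted task is captured in the ignored {{Future}} and silently lost, while {{execute}} delivers it to the thread's uncaught-exception handler, so switching can actually surface hidden failures.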






[jira] [Assigned] (HDDS-2661) Use field based Config annotation instead of method based

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2661:
--

Assignee: Marton Elek

> Use field based Config annotation instead of method based
> -
>
> Key: HDDS-2661
> URL: https://issues.apache.org/jira/browse/HDDS-2661
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-2413 proposes an additional usage of the @Config annotation: to set 
> configuration based on an existing configuration class.
> But as of now we annotate the setters instead of the fields. To avoid 
> annotation duplication (we need to read the values from either the getters or 
> the fields), I propose switching to field-based annotations instead of 
> setter-based ones.
> I think it's more readable, and additional validation (even class-level 
> validations) can be done in a @PostConstruct method.
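A sketch of what the field-based style could look like. {{@Config}} below is a simplified stand-in for the Ozone annotation, and {{validate()}} plays the role of the proposed @PostConstruct hook; none of these are the real definitions.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

public class FieldConfigDemo {
  // Simplified stand-in for the Ozone @Config annotation, on fields.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.FIELD)
  @interface Config {
    String key();
    String defaultValue();
  }

  static class ScmClientConfig {
    @Config(key = "scm.client.port", defaultValue = "9860")
    private int port;

    // Class-level validation, in the spirit of a @PostConstruct method.
    void validate() {
      if (port <= 0) {
        throw new IllegalStateException("port must be positive: " + port);
      }
    }

    int getPort() {
      return port;
    }
  }

  // Reads annotated fields directly, so the annotation is not duplicated
  // between a field and its setter.
  static <T> T inject(T bean, Map<String, String> props) {
    for (Field f : bean.getClass().getDeclaredFields()) {
      Config c = f.getAnnotation(Config.class);
      if (c != null) {
        f.setAccessible(true);
        try {
          f.setInt(bean, Integer.parseInt(
              props.getOrDefault(c.key(), c.defaultValue())));
        } catch (IllegalAccessException e) {
          throw new IllegalStateException(e);
        }
      }
    }
    return bean;
  }
}
```

With fields as the single source of truth, the injector never needs a matching getter/setter pair, and cross-field checks live in one post-construction method.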






[jira] [Created] (HDDS-2693) HddsVolume mixes ChunkLayOutVersion and DataNodeLayoutVersion

2019-12-09 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2693:
--

 Summary: HddsVolume mixes ChunkLayOutVersion and 
DataNodeLayoutVersion
 Key: HDDS-2693
 URL: https://issues.apache.org/jira/browse/HDDS-2693
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{HddsVolume}} [initializes {{layoutVersion}} using latest 
{{ChunkLayOutVersion}}|https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java#L268].
  But when writing the same info to file, it [verifies {{layoutVersion}} 
matches the latest 
{{DataNodeLayoutVersion}}|https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java#L292-L293].
  To satisfy the condition, {{ChunkLayOutVersion}} and {{DataNodeLayoutVersion}} 
have to be in sync, which means only one of them is necessary.  I think the 
intention was to use {{DataNodeLayoutVersion}} in both cases, as 
{{ChunkLayOutVersion}} is for key-value container internal structure.






[jira] [Created] (HDDS-2694) HddsVolume#readVersionFile fails when reading older versions

2019-12-09 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2694:
--

 Summary: HddsVolume#readVersionFile fails when reading older 
versions
 Key: HDDS-2694
 URL: https://issues.apache.org/jira/browse/HDDS-2694
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Attila Doroszlai


{{HddsVolume#layoutVersion}} is a version number, supposed to be used for 
handling upgrades from older versions.  Currently only one version is defined.  
But should a new version be introduced, {{HddsVolume}} would fail to read an 
older version file.  This is caused by a check in {{HddsVolumeUtil}} that only 
considers the latest version as valid:

{code:title=https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/utils/HddsVolumeUtil.java#L137-L153}
  /**
   * Returns layOutVersion if it is valid. Throws an exception otherwise.
   */
  @VisibleForTesting
  public static int getLayOutVersion(Properties props, File versionFile) throws
  InconsistentStorageStateException {
String lvStr = getProperty(props, OzoneConsts.LAYOUTVERSION, versionFile);

int lv = Integer.parseInt(lvStr);
if(DataNodeLayoutVersion.getLatestVersion().getVersion() != lv) {
  throw new InconsistentStorageStateException("Invalid layOutVersion. " +
  "Version file has layOutVersion as " + lv + " and latest Datanode " +
  "layOutVersion is " +
  DataNodeLayoutVersion.getLatestVersion().getVersion());
}
return lv;
  }
{code}

I think this should check whether the version number identifies a known 
{{DataNodeLayoutVersion}}.
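The suggested change can be sketched like this. The version set and constant below are illustrative only; in the real code they would come from the defined {{DataNodeLayoutVersion}} values, not a hard-coded set.

```java
import java.util.Set;

public class LayoutVersionCheck {
  // Illustrative: all *defined* layout versions, not just the latest.
  private static final Set<Integer> KNOWN_VERSIONS = Set.of(1, 2);
  private static final int LATEST = 2;

  static int getLayOutVersion(String lvStr) {
    int lv = Integer.parseInt(lvStr);
    // Old check rejected everything except LATEST, breaking upgrade reads.
    // New check accepts any known version.
    if (!KNOWN_VERSIONS.contains(lv)) {
      throw new IllegalStateException(
          "Invalid layOutVersion " + lv + "; known versions: " + KNOWN_VERSIONS);
    }
    return lv;
  }
}
```

With this shape, a datanode started on a newer release can still read a version file written by an older one, and genuinely unknown versions still fail fast.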






[jira] [Updated] (HDDS-2693) HddsVolume mixes ChunkLayOutVersion and DataNodeLayoutVersion

2019-12-09 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2693:
---
Status: Patch Available  (was: In Progress)

> HddsVolume mixes ChunkLayOutVersion and DataNodeLayoutVersion
> -
>
> Key: HDDS-2693
> URL: https://issues.apache.org/jira/browse/HDDS-2693
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HddsVolume}} [initializes {{layoutVersion}} using latest 
> {{ChunkLayOutVersion}}|https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java#L268].
>   But when writing the same info to file, it [verifies {{layoutVersion}} 
> matches the latest 
> {{DataNodeLayoutVersion}}|https://github.com/apache/hadoop-ozone/blob/1d56bc244995e857b842f62d3d1e544ee100bbc1/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java#L292-L293].
>   To satisfy the condition, {{ChunkLayOutVersion}} and 
> {{DataNodeLayoutVersion}} have to be in sync, which means only one of them is 
> necessary.  I think the intention was to use {{DataNodeLayoutVersion}} in 
> both cases, as {{ChunkLayOutVersion}} is for key-value container internal 
> structure.






[jira] [Updated] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-12-09 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-1812:
---
Status: Patch Available  (was: In Progress)

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDDS-2722) Let ChunkManager read/write ChunkBuffer instead of ByteBuffer

2019-12-12 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2722:
---
Status: Patch Available  (was: In Progress)

> Let ChunkManager read/write ChunkBuffer instead of ByteBuffer
> -
>
> Key: HDDS-2722
> URL: https://issues.apache.org/jira/browse/HDDS-2722
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Change {{ChunkManager}} read/write methods to accept/return {{ChunkBuffer}} 
> instead of {{ByteBuffer}}.  This allows seamlessly passing multiple buffers 
> without further interface change.






[jira] [Reopened] (HDDS-2508) Fix TestDeadNodeHandler

2019-12-13 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reopened HDDS-2508:


Thanks [~sodonnell] for spotting this.

> Fix TestDeadNodeHandler
> ---
>
> Key: HDDS-2508
> URL: https://issues.apache.org/jira/browse/HDDS-2508
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Priority: Major
>
> {code}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 63.647 s <<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
> [ERROR] testOnMessage(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  
> Time elapsed: 63.562 s  <<< ERROR!
> java.io.IOException: Could not allocate container. Cannot get any matching 
> pipeline for Type:RATIS, Factor:THREE, State:PipelineState.OPEN
>   at 
> org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:261)
>   at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.allocateContainer(SCMContainerManager.java:255)
>   at 
> org.apache.hadoop.hdds.scm.TestUtils.allocateContainer(TestUtils.java:488)
>   at 
> org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testOnMessage(TestDeadNodeHandler.java:154)
> {code}






[jira] [Created] (HDDS-2735) Let GitHub Actions run acceptance check in parallel

2019-12-14 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2735:
--

 Summary: Let GitHub Actions run acceptance check in parallel
 Key: HDDS-2735
 URL: https://issues.apache.org/jira/browse/HDDS-2735
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: build
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Currently GitHub Actions workflows are configured to run all checks in 
parallel, except acceptance test.  The rationale is that acceptance test takes 
the most time, and there is no reason to run it if a cheaper check catches some 
problem.

I propose to let GitHub Actions run acceptance test in parallel to address the 
following concerns:

# Although acceptance test is the slowest (~60 minutes), unit test also takes 
quite some time (~20-25 minutes).  Serializing these two checks increases the 
time to get feedback on PRs and commits by ~33-40%.
# For PRs and post-commit builds in forks, running all checks regardless of the 
result of independent checks allows authors to reduce the number of rounds they 
need to address any problems.
# For post-commit builds in Apache master, we expect all checks to pass.  
However, checks sometimes fail, e.g. due to transient network errors.  Skipping 
acceptance test due to such a problem in another check provides no benefit.
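
In GitHub Actions terms, the proposal amounts to dropping the acceptance job's dependency on the other jobs.  A hedged sketch follows; the job names, script paths, and action versions are illustrative, not the actual Ozone workflow:

```yaml
# Illustrative workflow fragment (names and paths are hypothetical).
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: ./hadoop-ozone/dev-support/checks/unit.sh
  acceptance:
    runs-on: ubuntu-latest
    # No "needs: unit" here: without a needs dependency, this job
    # starts immediately, in parallel with the other checks.
    steps:
      - uses: actions/checkout@v1
      - run: ./hadoop-ozone/dev-support/checks/acceptance.sh
```

With a `needs:` entry, the acceptance job would wait for (and be skipped on failure of) the listed jobs; removing it is what makes the checks independent.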






[jira] [Created] (HDDS-2739) No need to try install awscli before each test

2019-12-14 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2739:
--

 Summary: No need to try install awscli before each test
 Key: HDDS-2739
 URL: https://issues.apache.org/jira/browse/HDDS-2739
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


S3 acceptance test attempts to install {{awscli}} prior to each test case.  It 
is enough to do so before each suite.
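
In Robot Framework terms, this means moving the installation from a per-test setup to a suite-level setup.  A minimal sketch, with hypothetical keyword names and installation command (the real Ozone test code differs):

```robot
*** Settings ***
Library           OperatingSystem
# Before: "Test Setup    Install aws cli" ran the install before every test.
# After: Suite Setup runs it once, before the whole suite.
Suite Setup       Install aws cli

*** Keywords ***
Install aws cli
    # Hypothetical keyword body; shown only to illustrate the setup levels.
    Run    pip install --user awscli
```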






[jira] [Updated] (HDDS-2735) Let GitHub Actions run acceptance check in parallel

2019-12-14 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2735:
---
Status: Patch Available  (was: In Progress)

> Let GitHub Actions run acceptance check in parallel
> ---
>
> Key: HDDS-2735
> URL: https://issues.apache.org/jira/browse/HDDS-2735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently GitHub Actions workflows are configured to run all checks in 
> parallel, except acceptance test.  The rationale is that acceptance test 
> takes the most time, and there is no reason to run it if a cheaper check 
> catches some problem.
> I propose to let GitHub Actions run acceptance test in parallel to address 
> the following concerns:
> # Although acceptance test is the slowest (~60 minutes), unit test also takes 
> quite some time (~20-25 minutes).  Serializing these two checks increases the 
> time to get feedback on PRs and commits by ~33-40%.
> # For PRs and post-commit builds in forks, running all checks regardless of 
> the result of independent checks allows authors to reduce the number of 
> rounds they need to address any problems.
> # For post-commit builds in Apache master, we expect all checks to pass.  
> However, checks sometimes fail, e.g. due to transient network errors.  Skipping 
> acceptance test due to such a problem in another check provides no benefit.






[jira] [Created] (HDDS-2752) Unnecessary calls to isNoneEmpty and isAllEmpty

2019-12-15 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2752:
--

 Summary: Unnecessary calls to isNoneEmpty and isAllEmpty
 Key: HDDS-2752
 URL: https://issues.apache.org/jira/browse/HDDS-2752
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{isNoneEmpty}} and {{isAllEmpty}} check a variable number of strings.  For a 
single string they can be replaced with {{isNotEmpty}} and {{isEmpty}}, 
respectively.
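
A minimal sketch of the equivalence, in plain Java rather than the actual commons-lang3 {{StringUtils}} (assumed to be the library in question, since that is where {{isNoneEmpty}}/{{isAllEmpty}} live):

```java
// Self-contained sketch: these methods mirror the single-string semantics of
// StringUtils.isEmpty / isNotEmpty.  With one argument, isNoneEmpty(s) reduces
// to isNotEmpty(s), and isAllEmpty(s) reduces to isEmpty(s), so the varargs
// calls add overhead without adding meaning.
public class EmptyChecks {
    public static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }

    public static boolean isNotEmpty(String s) {
        return !isEmpty(s);
    }

    public static void main(String[] args) {
        System.out.println(isNotEmpty("bucket")); // true
        System.out.println(isEmpty(""));          // true
    }
}
```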






[jira] [Created] (HDDS-2751) Wrong number of placeholders in log message

2019-12-15 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2751:
--

 Summary: Wrong number of placeholders in log message
 Key: HDDS-2751
 URL: https://issues.apache.org/jira/browse/HDDS-2751
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Fix log messages where the number of placeholders does not match the number of 
parameters provided.
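
The mismatch is easy to miss because SLF4J-style logging silently ignores surplus arguments.  A self-contained sketch of the kind of check involved (the counting helper and example message are illustrative, not Ozone code):

```java
// Count "{}" placeholders in an SLF4J-style format string and compare the
// result to the number of arguments the caller supplies.
public class PlaceholderCheck {
    public static int countPlaceholders(String fmt) {
        int count = 0;
        for (int i = fmt.indexOf("{}"); i >= 0; i = fmt.indexOf("{}", i + 2)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Two placeholders but three arguments: the last argument would be
        // silently dropped by the logger instead of appearing in the message.
        String fmt = "Container {} moved to state {}";
        Object[] params = {1724, "CLOSED", "extra"};
        System.out.println(countPlaceholders(fmt) == params.length); // false
    }
}
```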






[jira] [Updated] (HDDS-2751) Wrong number of placeholders in log message

2019-12-15 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2751:
---
Status: Patch Available  (was: Open)

> Wrong number of placeholders in log message
> ---
>
> Key: HDDS-2751
> URL: https://issues.apache.org/jira/browse/HDDS-2751
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix log messages where number of placeholders does not match the number of 
> parameters provided.






[jira] [Created] (HDDS-2760) Intermittent timeout in TestCloseContainerEventHandler

2019-12-17 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2760:
--

 Summary: Intermittent timeout in TestCloseContainerEventHandler
 Key: HDDS-2760
 URL: https://issues.apache.org/jira/browse/HDDS-2760
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Attila Doroszlai


TestCloseContainerEventHandler depends on wall clock and fails intermittently:

{code}
2019-12-17T11:29:56.1873334Z [INFO] Running 
org.apache.hadoop.hdds.scm.container.TestCloseContainerEventHandler
2019-12-17T11:31:10.0593259Z [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
Skipped: 0, Time elapsed: 71.343 s <<< FAILURE! - in 
org.apache.hadoop.hdds.scm.container.TestCloseContainerEventHandler
2019-12-17T11:31:10.0604096Z [ERROR] 
testCloseContainerEventWithRatis(org.apache.hadoop.hdds.scm.container.TestCloseContainerEventHandler)
  Time elapsed: 66.214 s  <<< FAILURE!
2019-12-17T11:31:10.0604347Z java.lang.AssertionError: Messages are not 
processed in the given timeframe. Queued: 5 Processed: 0
2019-12-17T11:31:10.0614937Zat 
org.apache.hadoop.hdds.server.events.EventQueue.processAll(EventQueue.java:238)
2019-12-17T11:31:10.0616610Zat 
org.apache.hadoop.hdds.scm.container.TestCloseContainerEventHandler.testCloseContainerEventWithRatis(TestCloseContainerEventHandler.java:149)
{code}






[jira] [Resolved] (HDDS-2070) Create insight point to debug one specific pipeline

2019-12-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2070.

Fix Version/s: 0.5.0
   Resolution: Done

Thanks [~elek] for the contribution.  Merged it into master.

> Create insight point to debug one specific pipeline
> ---
>
> Key: HDDS-2070
> URL: https://issues.apache.org/jira/browse/HDDS-2070
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> With the first implementation of the ozone insight tool we had a demo 
> insight point to debug Ratis pipelines. It was not stable enough to include 
> in the first patch, so this patch is about fixing it.
> The goal is to implement a new insight point (e.g. datanode.pipeline) which 
> can show information about one pipeline.
> It can be done by retrieving the hosts of the pipeline and generating the 
> logger metrics (InsightPoint.getRelatedLoggers and InsightPoint.getMetrics) 
> based on the pipeline information (the same loggers should be displayed from 
> all three datanodes).
> The pipeline id can be defined as a filter parameter which (in this case) 
> should be required.






[jira] [Created] (HDDS-2715) Add config for client buffer increment size

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2715:
--

 Summary: Add config for client buffer increment size
 Key: HDDS-2715
 URL: https://issues.apache.org/jira/browse/HDDS-2715
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Client
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Introduce a new config property for client-side buffer increment size along the 
lines of {{ozone.client.stream.buffer.flush.size}} and 
{{ozone.client.stream.buffer.max.size}}.
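A configuration sketch following the pattern of the two existing properties; the property name and default value below are hypothetical (the final name is decided in the patch):

```xml
<!-- Hypothetical ozone-site.xml fragment for the new property. -->
<property>
  <name>ozone.client.stream.buffer.increment.size</name>
  <value>1MB</value>
  <description>Size of the increments in which the client buffers and
    sends data while writing a key.</description>
</property>
```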






[jira] [Created] (HDDS-2718) Send chunks in increments to datanode

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2718:
--

 Summary: Send chunks in increments to datanode
 Key: HDDS-2718
 URL: https://issues.apache.org/jira/browse/HDDS-2718
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM Client
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Change client to send chunks in multiple parts based on stream buffer increment 
size config.






[jira] [Updated] (HDDS-2717) Handle chunk increments in datanode

2019-12-12 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2717:
---
Issue Type: Improvement  (was: Task)

> Handle chunk increments in datanode
> ---
>
> Key: HDDS-2717
> URL: https://issues.apache.org/jira/browse/HDDS-2717
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> Let datanode handle incremental additions to chunks (data with non-zero 
> offset).






[jira] [Created] (HDDS-2717) Handle chunk increments in datanode

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2717:
--

 Summary: Handle chunk increments in datanode
 Key: HDDS-2717
 URL: https://issues.apache.org/jira/browse/HDDS-2717
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Let datanode handle incremental additions to chunks (data with non-zero offset).






[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-12-12 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2588:
---
Issue Type: Improvement  (was: Bug)

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (eg. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]






[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-12-12 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2588:
---
Fix Version/s: 0.5.0

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (eg. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]






[jira] [Created] (HDDS-2713) Decouple client write size and datanode chunk size

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2713:
--

 Summary: Decouple client write size and datanode chunk size
 Key: HDDS-2713
 URL: https://issues.apache.org/jira/browse/HDDS-2713
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode, SCM Client
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Currently the datanode creates chunk files as they are received from the HDDS 
client.  This creates a conflict between requirements: the client would like to 
use less memory for buffering, but the datanode needs to avoid small files.

The goal of this task is to decouple client and server write sizes, allowing the 
client to send data in smaller increments without affecting datanode storage.






[jira] [Created] (HDDS-2711) OM does not report JVM metrics

2019-12-11 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2711:
--

 Summary: OM does not report JVM metrics
 Key: HDDS-2711
 URL: https://issues.apache.org/jira/browse/HDDS-2711
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


JVM metrics are available from Datanode and SCM, but not from OM.






[jira] [Created] (HDDS-2721) Allow wrapping list of ByteBuffers with ChunkBuffer

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2721:
--

 Summary: Allow wrapping list of ByteBuffers with ChunkBuffer
 Key: HDDS-2721
 URL: https://issues.apache.org/jira/browse/HDDS-2721
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ChunkBuffer}} is a useful abstraction over {{ByteBuffer}} that hides whether 
it is backed by a single {{ByteBuffer}} or multiple ones 
({{IncrementalChunkBuffer}}).  However, {{IncrementalChunkBuffer}} allocates 
its own {{ByteBuffer}} instances and only works with uniform buffer sizes.  The 
goal of this task is to allow wrapping an existing list of {{ByteBuffer}} 
instances in {{ChunkBuffer}}.
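
A minimal self-contained sketch of the idea of presenting several existing {{ByteBuffer}} instances behind one view; the class name and the single method shown are illustrative, not the real {{ChunkBuffer}} interface, which is richer:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

// Hypothetical wrapper: takes an existing list of ByteBuffers (possibly of
// different sizes) instead of allocating its own uniform buffers.
public class WrappedBuffers {
    private final List<ByteBuffer> buffers;

    public WrappedBuffers(List<ByteBuffer> buffers) {
        this.buffers = buffers;
    }

    /** Total bytes remaining across all wrapped buffers. */
    public long remaining() {
        long total = 0;
        for (ByteBuffer b : buffers) {
            total += b.remaining();
        }
        return total;
    }

    public static void main(String[] args) {
        WrappedBuffers wb = new WrappedBuffers(Arrays.asList(
            ByteBuffer.wrap(new byte[]{1, 2, 3}),   // 3 bytes
            ByteBuffer.wrap(new byte[]{4, 5})));    // 2 bytes, different size
        System.out.println(wb.remaining()); // 5
    }
}
```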






[jira] [Created] (HDDS-2722) Let ChunkManager read/write ChunkBuffer instead of ByteBuffer

2019-12-12 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2722:
--

 Summary: Let ChunkManager read/write ChunkBuffer instead of 
ByteBuffer
 Key: HDDS-2722
 URL: https://issues.apache.org/jira/browse/HDDS-2722
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Change {{ChunkManager}} read/write methods to accept/return {{ChunkBuffer}} 
instead of {{ByteBuffer}}.  This allows seamlessly passing multiple buffers 
without further interface change.






[jira] [Created] (HDDS-2677) Acceptance test may fail despite success status

2019-12-05 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2677:
--

 Summary: Acceptance test may fail despite success status
 Key: HDDS-2677
 URL: https://issues.apache.org/jira/browse/HDDS-2677
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: freon
Reporter: Attila Doroszlai


Found this in a local acceptance test run:

{code}
Start freon testing   | FAIL |
'2019-12-05 12:25:24,744 [main] INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
2019-12-05 12:25:24,934 [main] INFO impl.MetricsSystemImpl: Scheduled Metric 
snapshot period at 10 second(s).
2019-12-05 12:25:24,935 [main] INFO impl.MetricsSystemImpl: ozone-freon metrics 
system started
2019-12-05 12:25:26,690 [main] INFO freon.RandomKeyGenerator: Number of 
Threads: 1
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of 
Volumes: 5.
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of Buckets 
per Volume: 5.
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of Keys 
per Bucket: 5.
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Key size: 10240 
bytes
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Buffer size: 4096 
bytes
2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: validateWrites : 
false
[ Message content over the limit has been removed. ]
util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

***
Status: Success
Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
Number of Volumes created: 5
Number of Buckets created: 25
Number of Keys added: 125
Ratis replication factor: THREE
Ratis replication type: RATIS
Average Time spent in volume creation: 00:00:00,210
Average Time spent in bucket creation: 00:00:00,213
Average Time spent in key creation: 00:00:37,506
Average Time spent in key write: 00:01:42,157
Total bytes written: 128
Total Execution time: 00:02:31,516
***' contains 'ERROR'
{code}

Need to check whether {{Status: Success}} is true (i.e. whether keys were 
indeed successfully created); if so, {{ERROR}} in the output should not make 
the test fail.
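
A shell sketch of the proposed decision logic; the function name and log handling are illustrative, not the actual robot test code:

```shell
#!/bin/sh
# Sketch: decide pass/fail from freon output.  'ERROR' in the output only
# fails the check when the run did not report 'Status: Success'.
check_freon_output() {
  out="$1"
  if printf '%s' "$out" | grep -q 'Status: Success'; then
    echo PASS
  elif printf '%s' "$out" | grep -q 'ERROR'; then
    echo FAIL
  else
    echo PASS
  fi
}

# Stack-trace noise containing 'ERROR' no longer fails a successful run.
check_freon_output "ERROR in worker thread
Status: Success"   # prints PASS
```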






[jira] [Created] (HDDS-2678) Add thread name to log pattern

2019-12-05 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2678:
--

 Summary: Add thread name to log pattern
 Key: HDDS-2678
 URL: https://issues.apache.org/jira/browse/HDDS-2678
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Ozone's default log4j patterns should include thread name, as it helps a bit in 
understanding events.






[jira] [Updated] (HDDS-2677) Acceptance test may fail despite success status

2019-12-05 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2677:
---
Component/s: test

> Acceptance test may fail despite success status
> ---
>
> Key: HDDS-2677
> URL: https://issues.apache.org/jira/browse/HDDS-2677
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: freon, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> Found this in a local acceptance test run:
> {code}
> Start freon testing   | FAIL |
> '2019-12-05 12:25:24,744 [main] INFO impl.MetricsConfig: Loaded properties 
> from hadoop-metrics2.properties
> 2019-12-05 12:25:24,934 [main] INFO impl.MetricsSystemImpl: Scheduled Metric 
> snapshot period at 10 second(s).
> 2019-12-05 12:25:24,935 [main] INFO impl.MetricsSystemImpl: ozone-freon 
> metrics system started
> 2019-12-05 12:25:26,690 [main] INFO freon.RandomKeyGenerator: Number of 
> Threads: 1
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of 
> Volumes: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of 
> Buckets per Volume: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of Keys 
> per Bucket: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Key size: 10240 
> bytes
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Buffer size: 
> 4096 bytes
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: validateWrites 
> : false
> [ Message content over the limit has been removed. ]
> util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 5
> Number of Buckets created: 25
> Number of Keys added: 125
> Ratis replication factor: THREE
> Ratis replication type: RATIS
> Average Time spent in volume creation: 00:00:00,210
> Average Time spent in bucket creation: 00:00:00,213
> Average Time spent in key creation: 00:00:37,506
> Average Time spent in key write: 00:01:42,157
> Total bytes written: 128
> Total Execution time: 00:02:31,516
> ***' contains 'ERROR'
> {code}
> Need to check whether {{Status: Success}} is true (i.e. whether keys were 
> indeed successfully created); if so, {{ERROR}} in the output should not make 
> the test fail.






[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-12-05 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2588:
---
Status: In Progress  (was: Patch Available)

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (eg. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]






[jira] [Updated] (HDDS-2677) Acceptance test may fail despite success status

2019-12-05 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2677:
---
Status: Patch Available  (was: In Progress)

> Acceptance test may fail despite success status
> ---
>
> Key: HDDS-2677
> URL: https://issues.apache.org/jira/browse/HDDS-2677
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: freon, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Found this in a local acceptance test run:
> {code}
> Start freon testing   | FAIL |
> '2019-12-05 12:25:24,744 [main] INFO impl.MetricsConfig: Loaded properties 
> from hadoop-metrics2.properties
> 2019-12-05 12:25:24,934 [main] INFO impl.MetricsSystemImpl: Scheduled Metric 
> snapshot period at 10 second(s).
> 2019-12-05 12:25:24,935 [main] INFO impl.MetricsSystemImpl: ozone-freon 
> metrics system started
> 2019-12-05 12:25:26,690 [main] INFO freon.RandomKeyGenerator: Number of 
> Threads: 1
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of 
> Volumes: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of 
> Buckets per Volume: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Number of Keys 
> per Bucket: 5.
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Key size: 10240 
> bytes
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: Buffer size: 
> 4096 bytes
> 2019-12-05 12:25:26,691 [main] INFO freon.RandomKeyGenerator: validateWrites 
> : false
> [ Message content over the limit has been removed. ]
> util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ***
> Status: Success
> Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
> Number of Volumes created: 5
> Number of Buckets created: 25
> Number of Keys added: 125
> Ratis replication factor: THREE
> Ratis replication type: RATIS
> Average Time spent in volume creation: 00:00:00,210
> Average Time spent in bucket creation: 00:00:00,213
> Average Time spent in key creation: 00:00:37,506
> Average Time spent in key write: 00:01:42,157
> Total bytes written: 128
> Total Execution time: 00:02:31,516
> ***' contains 'ERROR'
> {code}
> Need to check whether {{Status: Success}} is reported (i.e. whether keys were 
> indeed created successfully), and if so, {{ERROR}} in the output should not 
> make the test fail.
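The proposed check can be sketched as a small helper (hypothetical method, not the actual Robot Framework test code): a freon run counts as successful when its summary reports {{Status: Success}}, even if the captured log happens to contain the string {{ERROR}}.

```java
// Sketch of the proposed acceptance-test check (hypothetical helper,
// not the actual test code): the summary line is authoritative, log
// noise containing "ERROR" is ignored.
public class FreonOutputCheck {

    static boolean isRunSuccessful(String output) {
        // Trust the freon summary rather than scanning for "ERROR".
        return output.contains("Status: Success");
    }

    public static void main(String[] args) {
        String output = "...ERROR...\n***\nStatus: Success\n***";
        System.out.println(isRunSuccessful(output)); // prints "true"
    }
}
```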



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2678) Add thread name to log pattern

2019-12-05 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2678:
---
Status: Patch Available  (was: In Progress)

> Add thread name to log pattern
> --
>
> Key: HDDS-2678
> URL: https://issues.apache.org/jira/browse/HDDS-2678
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone's default log4j patterns should include thread name, as it helps a bit 
> in understanding events.






[jira] [Created] (HDDS-2684) Refactor common test utilities to hadoop-hdds/common

2019-12-06 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2684:
--

 Summary: Refactor common test utilities to hadoop-hdds/common
 Key: HDDS-2684
 URL: https://issues.apache.org/jira/browse/HDDS-2684
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Expose test code from {{hadoop-hdds/common}} to other modules.  Move some 
"common" test utilities.  Example: random {{DatanodeDetails}} creation.






[jira] [Updated] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-12-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-1812:
---
Status: In Progress  (was: Patch Available)

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDDS-2684) Refactor common test utilities to hadoop-hdds/common

2019-12-06 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2684:
---
Status: Patch Available  (was: In Progress)

> Refactor common test utilities to hadoop-hdds/common
> 
>
> Key: HDDS-2684
> URL: https://issues.apache.org/jira/browse/HDDS-2684
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Expose test code from {{hadoop-hdds/common}} to other modules.  Move some 
> "common" test utilities.  Example: random {{DatanodeDetails}} creation.






[jira] [Commented] (HDDS-2667) Promethues reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988553#comment-16988553
 ] 

Attila Doroszlai commented on HDDS-2667:


Thanks [~Sammi] for reporting this.  I can reproduce it by enabling RocksDB 
stats.  Can you please confirm if you have 
{{ozone.metastore.rocksdb.statistics}} enabled in {{ozone-site.xml}}?

> Promethues reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=` to any timeseries 
> scraped from this config.
>   - job_name: 'ozone'
> metrics_path: /prom
> # metrics_path defaults to '/metrics'
> # scheme defaults to 'http'.
> static_configs:
> - targets:
>   - "10.120.110.183:8080"
>   - "10.120.110.183:8081"
>   - "10.120.139.122:9882"
>   - "10.120.139.111:9882"
>   - "10.120.113.172:9882"
>   - "10.121.124.44:9882"
> The UP datanode 122 is not part of the pipeline with replication factor 
> THREE.  The latter is formed by the other 3 datanodes, which are all DOWN. 






[jira] [Updated] (HDDS-2667) Promethues reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2667:
---
Description: 
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

{code}
scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped 
from this config.
  - job_name: 'ozone'
metrics_path: /prom

# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.

static_configs:
- targets:
  - "10.120.110.183:8080"
  - "10.120.110.183:8081"
  - "10.120.139.122:9882"
  - "10.120.139.111:9882"
  - "10.120.113.172:9882"
  - "10.121.124.44:9882"
{code}

The UP datanode 122 is not part of the pipeline with replication factor THREE.  
The latter is formed by the other 3 datanodes, which are all DOWN. 

  was:
Prometheus version 2.14.0

Configuration set in prometheus.yml, in which 10.120.110.183 is the master with 
OM and SCM. All others are datanodes.

scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped 
from this config.
  - job_name: 'ozone'
metrics_path: /prom

# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.

static_configs:
- targets:
  - "10.120.110.183:8080"
  - "10.120.110.183:8081"
  - "10.120.139.122:9882"
  - "10.120.139.111:9882"
  - "10.120.113.172:9882"
  - "10.121.124.44:9882"

The UP datanode 122 is not part of the pipeline with replication factor THREE.  
The latter is formed by the other 3 datanodes, which are all DOWN. 


> Promethues reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Attila Doroszlai
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> {code}
> scrape_configs:
>   # The job name is added as a label `job=` to any timeseries 
> scraped from this config.
>   - job_name: 'ozone'
> metrics_path: /prom
> # metrics_path defaults to '/metrics'
> # scheme defaults to 'http'.
> static_configs:
> - targets:
>   - "10.120.110.183:8080"
>   - "10.120.110.183:8081"
>   - "10.120.139.122:9882"
>   - "10.120.139.111:9882"
>   - "10.120.113.172:9882"
>   - "10.121.124.44:9882"
> {code}
> The UP datanode 122 is not part of the pipeline with replication factor 
> THREE.  The latter is formed by the other 3 datanodes, which are all DOWN. 






[jira] [Assigned] (HDDS-2667) Promethues reports invalid metric type

2019-12-04 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2667:
--

Assignee: Attila Doroszlai

> Promethues reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Attila Doroszlai
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> scrape_configs:
>   # The job name is added as a label `job=` to any timeseries 
> scraped from this config.
>   - job_name: 'ozone'
> metrics_path: /prom
> # metrics_path defaults to '/metrics'
> # scheme defaults to 'http'.
> static_configs:
> - targets:
>   - "10.120.110.183:8080"
>   - "10.120.110.183:8081"
>   - "10.120.139.122:9882"
>   - "10.120.139.111:9882"
>   - "10.120.113.172:9882"
>   - "10.121.124.44:9882"
> The UP datanode 122 is not part of the pipeline with replication factor 
> THREE.  The latter is formed by the other 3 datanodes, which are all DOWN. 






[jira] [Updated] (HDDS-2667) Prometheus reports invalid metric type

2019-12-05 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2667:
---
Summary: Prometheus reports invalid metric type  (was: Promethues reports 
invalid metric type)

> Prometheus reports invalid metric type
> --
>
> Key: HDDS-2667
> URL: https://issues.apache.org/jira/browse/HDDS-2667
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Attila Doroszlai
>Priority: Major
> Attachments: promethues-targets.png
>
>
> Prometheus version 2.14.0
> Configuration set in prometheus.yml, in which 10.120.110.183 is the master 
> with OM and SCM. All others are datanodes.
> {code}
> scrape_configs:
>   # The job name is added as a label `job=` to any timeseries 
> scraped from this config.
>   - job_name: 'ozone'
> metrics_path: /prom
> # metrics_path defaults to '/metrics'
> # scheme defaults to 'http'.
> static_configs:
> - targets:
>   - "10.120.110.183:8080"
>   - "10.120.110.183:8081"
>   - "10.120.139.122:9882"
>   - "10.120.139.111:9882"
>   - "10.120.113.172:9882"
>   - "10.121.124.44:9882"
> {code}
> The UP datanode 122 is not part of the pipeline with replication factor 
> THREE.  The latter is formed by the other 3 datanodes, which are all DOWN. 






[jira] [Created] (HDDS-2688) Refactor container response builders to hdds-common

2019-12-07 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2688:
--

 Summary: Refactor container response builders to hdds-common
 Key: HDDS-2688
 URL: https://issues.apache.org/jira/browse/HDDS-2688
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ContainerUtils}} and {{BlockUtils}} have some helper functions to build 
responses to container commands.  These would be useful for client-side unit 
tests, but {{client}} does not depend on {{container-service}} since the 
interfaces and messages it needs are defined in {{common}}.  This issue 
proposes to move these helpers to {{common}} to avoid duplicating the 
functionality for tests.






[jira] [Updated] (HDDS-2688) Refactor container response builders to hdds-common

2019-12-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2688:
---
Status: Patch Available  (was: In Progress)

> Refactor container response builders to hdds-common
> ---
>
> Key: HDDS-2688
> URL: https://issues.apache.org/jira/browse/HDDS-2688
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{ContainerUtils}} and {{BlockUtils}} have some helper functions to build 
> responses to container commands.  These would be useful for client-side unit 
> tests, but {{client}} does not depend on {{container-service}} since the 
> interfaces and messages it needs are defined in {{common}}.  This issue 
> proposes to move these helpers to {{common}} to avoid duplicating the 
> functionality for tests.






[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-12-07 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2588:
---
Status: Patch Available  (was: In Progress)

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (eg. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]






[jira] [Created] (HDDS-2691) ACL code cleanup

2019-12-07 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2691:
--

 Summary: ACL code cleanup
 Key: HDDS-2691
 URL: https://issues.apache.org/jira/browse/HDDS-2691
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Sonar reports lots of duplication around {{AclHandler}} classes in Ozone Shell.






[jira] [Updated] (HDDS-2752) Unnecessary calls to isNoneEmpty and isAllEmpty

2019-12-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2752:
---
Status: Patch Available  (was: In Progress)

> Unnecessary calls to isNoneEmpty and isAllEmpty
> ---
>
> Key: HDDS-2752
> URL: https://issues.apache.org/jira/browse/HDDS-2752
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{isNoneEmpty}} and {{isAllEmpty}} check variable number of strings.  For 
> single string they can be replaced with {{isNotEmpty}} and {{isEmpty}}.
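The simplification can be illustrated with plain-Java equivalents of the two commons-lang3 helpers (the method bodies below are a sketch for illustration, not the library source): for a single argument the varargs form adds an array allocation without changing the result.

```java
public class StringChecks {

    // Varargs form, as in StringUtils.isNoneEmpty: true only if
    // every argument is non-null and non-empty.
    static boolean isNoneEmpty(CharSequence... css) {
        for (CharSequence cs : css) {
            if (cs == null || cs.length() == 0) {
                return false;
            }
        }
        return true;
    }

    // Single-argument form, as in StringUtils.isNotEmpty: no varargs
    // array allocation, and the intent is clearer at the call site.
    static boolean isNotEmpty(CharSequence cs) {
        return cs != null && cs.length() > 0;
    }

    public static void main(String[] args) {
        // For a single string the two checks are equivalent.
        System.out.println(isNoneEmpty("bucket") == isNotEmpty("bucket")); // true
        System.out.println(isNoneEmpty("") == isNotEmpty(""));             // true
    }
}
```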






[jira] [Updated] (HDDS-2739) No need to try install awscli before each test

2019-12-14 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2739:
---
Status: Patch Available  (was: In Progress)

> No need to try install awscli before each test
> --
>
> Key: HDDS-2739
> URL: https://issues.apache.org/jira/browse/HDDS-2739
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> S3 acceptance test attempts to install {{awscli}} prior to each test case.  
> It is enough to do so before each suite.






[jira] [Created] (HDDS-2627) Skip sonar check in forks

2019-11-24 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2627:
--

 Summary: Skip sonar check in forks
 Key: HDDS-2627
 URL: https://issues.apache.org/jira/browse/HDDS-2627
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


_unit_ step of GitHub Actions-based CI checks is failing for commits pushed to 
forks due to lack of {{SONARCLOUD_TOKEN}}.

Background: HDDS-2587 added Sonar check in post-commit workflow, publishing 
results to SonarCloud.  It does not work in forks, as it requires SonarCloud 
token.  This causes _unit_ step to fail completely.  Example: 
https://github.com/bharatviswa504/hadoop-ozone/runs/316829850






[jira] [Updated] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-11-26 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-1812:
---
Status: Patch Available  (was: In Progress)

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDDS-2626) Avoid hostname lookup for invalid local IP addresses

2019-11-25 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2626:
---
Status: Patch Available  (was: In Progress)

> Avoid hostname lookup for invalid local IP addresses
> 
>
> Key: HDDS-2626
> URL: https://issues.apache.org/jira/browse/HDDS-2626
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{OzoneSecurityUtil#getValidInetsForCurrentHost}} performs hostname lookup 
> for all local network interfaces, even for invalid addresses.  This 
> significantly slows down some secure tests ({{TestHddsSecureDatanodeInit}}, 
> {{TestSecureOzoneCluster}}) when run on a machine with special IPv6 network 
> interfaces due to timeout reaching IPv6 DNS servers.
> This issue proposes to disable the lookup for invalid addresses.
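The proposed filter can be sketched as a predicate over local addresses (hypothetical method, not the actual {{OzoneSecurityUtil}} code): reverse-DNS lookup is skipped for addresses that can never be valid service endpoints, so no DNS round-trip is wasted on them.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressFilter {

    // Hypothetical predicate (not the actual OzoneSecurityUtil code):
    // only addresses that could be valid service endpoints are worth
    // a reverse-DNS lookup.
    static boolean isWorthLookingUp(InetAddress addr) {
        return !addr.isAnyLocalAddress()      // 0.0.0.0 / ::
            && !addr.isLinkLocalAddress()     // 169.254.0.0/16, fe80::/10
            && !addr.isMulticastAddress();    // 224.0.0.0/4, ff00::/8
    }

    public static void main(String[] args) throws UnknownHostException {
        // getByName with an IP literal does not trigger a DNS lookup.
        System.out.println(isWorthLookingUp(InetAddress.getByName("10.0.0.5")));    // true
        System.out.println(isWorthLookingUp(InetAddress.getByName("169.254.1.1"))); // false
        System.out.println(isWorthLookingUp(InetAddress.getByName("224.0.0.1")));   // false
    }
}
```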






[jira] [Updated] (HDDS-2627) Skip sonar check in forks

2019-11-25 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2627:
---
Status: Patch Available  (was: In Progress)

> Skip sonar check in forks
> -
>
> Key: HDDS-2627
> URL: https://issues.apache.org/jira/browse/HDDS-2627
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> _unit_ step of GitHub Actions-based CI checks is failing for commits pushed 
> to forks due to lack of {{SONARCLOUD_TOKEN}}.
> Background: HDDS-2587 added Sonar check in post-commit workflow, publishing 
> results to SonarCloud.  It does not work in forks, as it requires SonarCloud 
> token.  This causes _unit_ step to fail completely.  Example: 
> https://github.com/bharatviswa504/hadoop-ozone/runs/316829850






[jira] [Created] (HDDS-2628) Make AuditMessage parameters strongly typed

2019-11-26 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2628:
--

 Summary: Make AuditMessage parameters strongly typed
 Key: HDDS-2628
 URL: https://issues.apache.org/jira/browse/HDDS-2628
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Improve type safety in {{AuditMessage$Builder}} for methods {{forOperation}} 
and {{withResult}} by using existing {{interface AuditAction}} and {{enum 
AuditEventStatus}} respectively instead of Strings.
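A sketch of the direction (type and method names follow the issue description; the actual Ozone classes differ in detail): the builder accepts the existing {{AuditAction}} and {{AuditEventStatus}} types, so an invalid operation or result fails at compile time instead of slipping through as a free-form string.

```java
public class AuditExample {

    // Assumed shapes of the existing types named in the description.
    interface AuditAction {
        String getAction();
    }

    enum AuditEventStatus { SUCCESS, FAILURE }

    // Builder sketch: strongly typed parameters instead of Strings.
    static class AuditMessageBuilder {
        private AuditAction op;
        private AuditEventStatus result;

        AuditMessageBuilder forOperation(AuditAction op) {
            this.op = op;
            return this;
        }

        AuditMessageBuilder withResult(AuditEventStatus result) {
            this.result = result;
            return this;
        }

        String build() {
            return "op=" + op.getAction() + " | ret=" + result;
        }
    }

    public static void main(String[] args) {
        AuditAction createVolume = () -> "CREATE_VOLUME";
        String msg = new AuditMessageBuilder()
            .forOperation(createVolume)
            .withResult(AuditEventStatus.SUCCESS)
            .build();
        System.out.println(msg); // op=CREATE_VOLUME | ret=SUCCESS
    }
}
```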






[jira] [Created] (HDDS-2632) Fix TestContainerPersistence#testDeleteChunk

2019-11-27 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2632:
--

 Summary: Fix TestContainerPersistence#testDeleteChunk
 Key: HDDS-2632
 URL: https://issues.apache.org/jira/browse/HDDS-2632
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{TestContainerPersistence#testDeleteChunk}} is failing due to unexpected 
exception message.  This is caused by mix of two commits:

 * https://github.com/apache/hadoop-ozone/commit/fe7fccf2b changed actual 
message
 * https://github.com/apache/hadoop-ozone/commit/4a9174500 moved the test case 
from integration to unit

Each of these was tested without the other.






[jira] [Updated] (HDDS-2632) Fix TestContainerPersistence#testDeleteChunk

2019-11-27 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2632:
---
Status: Patch Available  (was: Open)

> Fix TestContainerPersistence#testDeleteChunk
> 
>
> Key: HDDS-2632
> URL: https://issues.apache.org/jira/browse/HDDS-2632
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{TestContainerPersistence#testDeleteChunk}} is failing due to unexpected 
> exception message.  This is caused by mix of two commits:
>  * https://github.com/apache/hadoop-ozone/commit/fe7fccf2b changed actual 
> message
>  * https://github.com/apache/hadoop-ozone/commit/4a9174500 moved the test 
> case from integration to unit
> Each of these was tested without the other.






[jira] [Updated] (HDDS-2655) Use pre-compiled Pattern in NetUtils#normalize

2019-11-30 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2655:
---
Status: Patch Available  (was: In Progress)

> Use pre-compiled Pattern in NetUtils#normalize
> --
>
> Key: HDDS-2655
> URL: https://issues.apache.org/jira/browse/HDDS-2655
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{NetUtils#normalize}} uses {{String#replaceAll}}, which creates a 
> {{Pattern}} for each call.  It could be replaced with a pre-compiled 
> {{Pattern}}.
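The change can be illustrated as follows (the regex below is a placeholder, not the one used by {{NetUtils}}): {{String#replaceAll}} compiles its regex on every call, while a {{static final Pattern}} does that work once per class load.

```java
import java.util.regex.Pattern;

public class NormalizeExample {

    // Compiled once; reused by every call to normalizeFast.
    // (Placeholder regex for illustration, not the NetUtils pattern.)
    private static final Pattern MULTIPLE_SLASHES = Pattern.compile("/+");

    static String normalizeSlow(String path) {
        return path.replaceAll("/+", "/");   // compiles "/+" on each call
    }

    static String normalizeFast(String path) {
        return MULTIPLE_SLASHES.matcher(path).replaceAll("/");
    }

    public static void main(String[] args) {
        System.out.println(normalizeFast("/rack//node"));  // prints "/rack/node"
    }
}
```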






[jira] [Updated] (HDDS-2653) Improve executor memory usage in new Freon tests

2019-11-30 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2653:
---
Status: Patch Available  (was: In Progress)

> Improve executor memory usage in new Freon tests
> 
>
> Key: HDDS-2653
> URL: https://issues.apache.org/jira/browse/HDDS-2653
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> New Freon tests (descendants of {{BaseFreonGenerator}}) suffer from a memory 
> issue similar to HDDS-1785, caused by the way concurrency is handled.
> Steps to reproduce:
> {code}
> export HADOOP_OPTS='-Xmx256M -XX:+HeapDumpOnOutOfMemoryError'
> ozone freon omkg -F ONE -n 33554432 -t 10 -p omkg
> {code}
> Freon attempts to submit 32M tasks to the executor, requiring at least 1.5GB 
> memory.
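One common way to cap the queued work is to guard submission with a semaphore so that at most a fixed number of tasks are in flight at once.  The sketch below only illustrates that general technique under invented names ({{BoundedSubmitter}}, {{maxQueued}}); the actual Freon fix may differ.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

class BoundedSubmitter {
    public static long run(long taskCount, int threads, int maxQueued) {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        Semaphore permits = new Semaphore(maxQueued);
        AtomicLong done = new AtomicLong();
        try {
            for (long i = 0; i < taskCount; i++) {
                permits.acquire(); // blocks: at most maxQueued tasks in flight
                executor.execute(() -> {
                    try {
                        done.incrementAndGet(); // real workload goes here
                    } finally {
                        permits.release();
                    }
                });
            }
            executor.shutdown();
            executor.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

With this pattern, memory use depends on {{maxQueued}} rather than on the total task count, so 32M tasks no longer need 32M queued {{Runnable}} objects.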






[jira] [Created] (HDDS-2655) Use pre-compiled Pattern in NetUtils#normalize

2019-11-30 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2655:
--

 Summary: Use pre-compiled Pattern in NetUtils#normalize
 Key: HDDS-2655
 URL: https://issues.apache.org/jira/browse/HDDS-2655
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{NetUtils#normalize}} uses {{String#replaceAll}}, which creates a {{Pattern}} 
for each call.  It could be replaced with a pre-compiled {{Pattern}}.






[jira] [Created] (HDDS-2653) Improve executor memory usage in new Freon tests

2019-11-30 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2653:
--

 Summary: Improve executor memory usage in new Freon tests
 Key: HDDS-2653
 URL: https://issues.apache.org/jira/browse/HDDS-2653
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: freon
Affects Versions: 0.5.0
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


New Freon tests (descendants of {{BaseFreonGenerator}}) suffer from a memory 
issue similar to HDDS-1785, caused by the way concurrency is handled.

Steps to reproduce:

{code}
export HADOOP_OPTS='-Xmx256M -XX:+HeapDumpOnOutOfMemoryError'
ozone freon omkg -F ONE -n 33554432 -t 10 -p omkg
{code}

Freon attempts to submit 32M tasks to the executor, requiring at least 1.5GB 
memory.






[jira] [Updated] (HDDS-2653) Improve executor memory usage in new Freon tests

2019-11-30 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2653:
---
Description: 
New Freon tests (descendants of {{BaseFreonGenerator}}) suffer from a memory 
issue similar to HDDS-1785, caused by the way concurrency is handled.

Steps to reproduce:

{code}
export HADOOP_OPTS='-Xmx256M -XX:+HeapDumpOnOutOfMemoryError'
ozone freon omkg -F ONE -n 33554432 -t 10 -p omkg
{code}

Freon attempts to submit 32M tasks to the executor, requiring at least 1.5GB 
memory.

  was:
New Freon tests (descendants of {{BaseFreonGenerator}}) suffer from a memory 
issue similar to HDDS-1785, caused by the way concurrency is handled.

Steps to reproduce:

{{code}}
export HADOOP_OPTS='-Xmx256M -XX:+HeapDumpOnOutOfMemoryError'
ozone freon omkg -F ONE -n 33554432 -t 10 -p omkg
{code}

Freon attempts to submit 32M tasks to the executor, requiring at least 1.5GB 
memory.


> Improve executor memory usage in new Freon tests
> 
>
> Key: HDDS-2653
> URL: https://issues.apache.org/jira/browse/HDDS-2653
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> New Freon tests (descendants of {{BaseFreonGenerator}}) suffer from a memory 
> issue similar to HDDS-1785, caused by the way concurrency is handled.
> Steps to reproduce:
> {code}
> export HADOOP_OPTS='-Xmx256M -XX:+HeapDumpOnOutOfMemoryError'
> ozone freon omkg -F ONE -n 33554432 -t 10 -p omkg
> {code}
> Freon attempts to submit 32M tasks to the executor, requiring at least 1.5GB 
> memory.






[jira] [Created] (HDDS-2654) Conditionally enable profiling at the kernel level

2019-11-30 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2654:
--

 Summary: Conditionally enable profiling at the kernel level
 Key: HDDS-2654
 URL: https://issues.apache.org/jira/browse/HDDS-2654
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: docker
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Extend {{entrypoint.sh}} to set the kernel parameters required for profiling if 
the {{ASYNC_PROFILER_ENABLED}} environment variable is set.

Ref:

{code:title=https://cwiki.apache.org/confluence/display/HADOOP/Java+Profiling+of+Ozone}
echo 1 > /proc/sys/kernel/perf_event_paranoid
echo 0 > /proc/sys/kernel/kptr_restrict
{code}
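A conditional version of these commands in {{entrypoint.sh}} might look like the sketch below.  The function name and the optional path-prefix parameter are inventions for testability; the real script may look different.

```shell
# Write the two kernel knobs only when profiling is requested via the
# trigger environment variable.  $1 is an optional path prefix so the
# function can be exercised against a scratch directory instead of /proc.
enable_profiling_sysctls() {
  root="${1:-}"
  if [ -n "${ASYNC_PROFILER_ENABLED:-}" ]; then
    echo 1 > "${root}/proc/sys/kernel/perf_event_paranoid"
    echo 0 > "${root}/proc/sys/kernel/kptr_restrict"
  fi
}
```

When the variable is unset the function does nothing, so non-profiling containers keep the kernel defaults.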






[jira] [Commented] (HDDS-2657) Key get command creates the output file even in case of KEY_NOT_FOUND

2019-12-02 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986274#comment-16986274
 ] 

Attila Doroszlai commented on HDDS-2657:


Thanks [~ayushtkn] for reporting this issue.  I have tested the fix; it works 
fine.

> Key get command creates the output file even in case of KEY_NOT_FOUND
> -
>
> Key: HDDS-2657
> URL: https://issues.apache.org/jira/browse/HDDS-2657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
> Attachments: HDDS-2657-01.patch
>
>
> If a key isn't found, the key get command still creates an output file of 0 
> size.
> {noformat}
> ayush@ayushpc:~/ozone/hadoop-ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/bin$
>  ll
> total 36
> drwxr-xr-x  3 ayush ayush  4096 Dec  2 22:41 ./
> drwxr-xr-x 13 ayush ayush  4096 Nov 30 19:39 ../
> -rwxr-xr-x  1 ayush ayush 12786 Nov 30 19:35 ozone*
> ayush@ayushpc:~/ozone/hadoop-ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/bin$
>  ./ozone sh key get "hive/bucket/key 21" "file 21"
> KEY_NOT_FOUND Key not found
> ayush@ayushpc:~/ozone/hadoop-ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/bin$
>  ll
> total 36
> drwxr-xr-x  3 ayush ayush  4096 Dec  2 22:42  ./
> drwxr-xr-x 13 ayush ayush  4096 Nov 30 19:39  ../
> -rw-r--r--  1 ayush ayush 0 Dec  2 22:42 'file 21'
> -rwxr-xr-x  1 ayush ayush 12786 Nov 30 19:35  ozone*
> {noformat}
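One way to avoid leaving an empty file behind is to open the key stream first and create the local output file only after the lookup has succeeded.  The sketch below illustrates that ordering with a hypothetical {{KeyStore}} interface standing in for the real client call that raises {{KEY_NOT_FOUND}}; it is not the actual patch.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

class KeyDownloader {
    // Hypothetical lookup: throws if the key does not exist.
    interface KeyStore {
        InputStream read(String key) throws IOException;
    }

    /**
     * Open the key stream first; only create the local file once the key is
     * known to exist, so a failed lookup leaves no empty output file behind.
     */
    static void get(KeyStore store, String key, Path out) throws IOException {
        try (InputStream in = store.read(key)) {       // may throw: no file yet
            try (OutputStream os = Files.newOutputStream(out)) {
                in.transferTo(os);                     // Java 9+
            }
        }
    }
}
```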






[jira] [Updated] (HDDS-2654) Conditionally enable profiling at the kernel level

2019-12-02 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2654:
---
Description: 
Extend {{entrypoint.sh}} to set the kernel parameters required for profiling if 
the {{ASYNC_PROFILER_HOME}} environment variable (used by {{ProfileServlet}}) 
is set.

Ref:

{code:title=https://cwiki.apache.org/confluence/display/HADOOP/Java+Profiling+of+Ozone}
echo 1 > /proc/sys/kernel/perf_event_paranoid
echo 0 > /proc/sys/kernel/kptr_restrict
{code}

  was:
Extend {{entrypoint.sh}} to set the kernel parameters required for profiling if 
the {{ASYNC_PROFILER_ENABLED}} environment variable is set.

Ref:

{code:title=https://cwiki.apache.org/confluence/display/HADOOP/Java+Profiling+of+Ozone}
echo 1 > /proc/sys/kernel/perf_event_paranoid
echo 0 > /proc/sys/kernel/kptr_restrict
{code}


> Conditionally enable profiling at the kernel level
> --
>
> Key: HDDS-2654
> URL: https://issues.apache.org/jira/browse/HDDS-2654
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> Extend {{entrypoint.sh}} to set the kernel parameters required for profiling 
> if the {{ASYNC_PROFILER_HOME}} environment variable (used by 
> {{ProfileServlet}}) is set.
> Ref:
> {code:title=https://cwiki.apache.org/confluence/display/HADOOP/Java+Profiling+of+Ozone}
> echo 1 > /proc/sys/kernel/perf_event_paranoid
> echo 0 > /proc/sys/kernel/kptr_restrict
> {code}






[jira] [Created] (HDDS-2656) Prefer execute() over submit() if the returned Future is ignored

2019-12-01 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2656:
--

 Summary: Prefer execute() over submit() if the returned Future is 
ignored
 Key: HDDS-2656
 URL: https://issues.apache.org/jira/browse/HDDS-2656
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{Future<?> submit(Runnable)}} and {{void execute(Runnable)}} in 
{{ExecutorService}} have the same effect: the task gets executed.  If the 
returned {{Future}} is ignored, {{execute}} can be used instead of {{submit}} 
to avoid the wrapper objects ({{FutureTask}} instances) that {{submit}} 
creates for each task.
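A minimal illustration of the pattern (names are made up for the example):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class ExecuteVsSubmit {
    public static int countWithExecute(int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger counter = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            // pool.submit(r) would wrap r in a new FutureTask just to return
            // a Future the caller throws away; execute(r) skips that allocation.
            pool.execute(counter::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }
}
```

The behavioral difference to keep in mind: exceptions from an {{execute}}-d task go to the thread's uncaught-exception handler instead of being captured in a {{Future}}.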






[jira] [Updated] (HDDS-2654) Conditionally enable profiling at the kernel level

2019-12-02 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2654:
---
Status: Patch Available  (was: In Progress)

> Conditionally enable profiling at the kernel level
> --
>
> Key: HDDS-2654
> URL: https://issues.apache.org/jira/browse/HDDS-2654
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Extend {{entrypoint.sh}} to set the kernel parameters required for profiling 
> if the {{ASYNC_PROFILER_HOME}} environment variable (used by 
> {{ProfileServlet}}) is set.
> Ref:
> {code:title=https://cwiki.apache.org/confluence/display/HADOOP/Java+Profiling+of+Ozone}
> echo 1 > /proc/sys/kernel/perf_event_paranoid
> echo 0 > /proc/sys/kernel/kptr_restrict
> {code}






[jira] [Assigned] (HDDS-2475) Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-01 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-2475:
--

Assignee: Attila Doroszlai

> Unregister ContainerMetadataScrubberMetrics on thread exit
> --
>
> Key: HDDS-2475
> URL: https://issues.apache.org/jira/browse/HDDS-2475
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> {{ContainerMetadataScanner}} thread should call 
> {{ContainerMetadataScrubberMetrics#unregister}} before exiting.






[jira] [Updated] (HDDS-2475) Unregister ContainerMetadataScrubberMetrics on thread exit

2019-12-02 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2475:
---
Status: Patch Available  (was: In Progress)

> Unregister ContainerMetadataScrubberMetrics on thread exit
> --
>
> Key: HDDS-2475
> URL: https://issues.apache.org/jira/browse/HDDS-2475
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{ContainerMetadataScanner}} thread should call 
> {{ContainerMetadataScrubberMetrics#unregister}} before exiting.






[jira] [Updated] (HDDS-2616) Sonar : reduce Cognitive Complexity of isDeletionAllowed in BlockDeletingService

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2616:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of isDeletionAllowed in 
> BlockDeletingService
> 
>
> Key: HDDS-2616
> URL: https://issues.apache.org/jira/browse/HDDS-2616
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> CC of isDeletionAllowed is reported as 21 :
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-8zKcVY8lQ4ZsTQ&open=AW5md-8zKcVY8lQ4ZsTQ






[jira] [Updated] (HDDS-2618) Sonar : reduce Cognitive Complexity of readVolume in ContainerReader

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2618:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of readVolume in ContainerReader
> 
>
> Key: HDDS-2618
> URL: https://issues.apache.org/jira/browse/HDDS-2618
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> Sonar reports CC value of readVolume in ContainerReader is 31:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9fKcVY8lQ4ZsUU&open=AW5md-9fKcVY8lQ4ZsUU






[jira] [Updated] (HDDS-2615) Sonar : reduce Cognitive Complexity of scanData in KeyValueContainerCheck

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2615:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of scanData in KeyValueContainerCheck
> -
>
> Key: HDDS-2615
> URL: https://issues.apache.org/jira/browse/HDDS-2615
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> CC of scanData method is reported as 31 :
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-82KcVY8lQ4ZsTS&open=AW5md-82KcVY8lQ4ZsTS






[jira] [Updated] (HDDS-2617) Sonar : reduce Cognitive Complexity of runIteration in ContainerDataScanner

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2617:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of runIteration in ContainerDataScanner
> ---
>
> Key: HDDS-2617
> URL: https://issues.apache.org/jira/browse/HDDS-2617
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> CC value of runIteration is reported as 19:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9kKcVY8lQ4ZsUY&open=AW5md-9kKcVY8lQ4ZsUY
> minor, duplicate string literal issue in same file:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9kKcVY8lQ4ZsUW&open=AW5md-9kKcVY8lQ4ZsUW






[jira] [Updated] (HDDS-2612) Sonar : Reduce Cognitive Complexity of dispatchRequest in HddsDispatcher

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2612:
---
Labels: sonar  (was: )

> Sonar : Reduce Cognitive Complexity of dispatchRequest in HddsDispatcher
> 
>
> Key: HDDS-2612
> URL: https://issues.apache.org/jira/browse/HDDS-2612
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> Sonar reports CC of dispatchRequest is 49, reduce to 15.
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6ZKcVY8lQ4ZsQs&open=AW5md-6ZKcVY8lQ4ZsQs
> minor duplicate string literal issue in same source file.
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6ZKcVY8lQ4ZsQo&open=AW5md-6ZKcVY8lQ4ZsQo






[jira] [Updated] (HDDS-2614) Sonar : reduce Cognitive Complexity of applyTransaction in ContainerStateMachine

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2614:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of applyTransaction in 
> ContainerStateMachine
> 
>
> Key: HDDS-2614
> URL: https://issues.apache.org/jira/browse/HDDS-2614
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> Sonar reports CC of applyTransaction is 22:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-65KcVY8lQ4ZsRU&open=AW5md-65KcVY8lQ4ZsRU
> minor(Sonar "critical") duplicate string literal issues:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-65KcVY8lQ4ZsRM&open=AW5md-65KcVY8lQ4ZsRM
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-65KcVY8lQ4ZsRN&open=AW5md-65KcVY8lQ4ZsRN
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-65KcVY8lQ4ZsRO&open=AW5md-65KcVY8lQ4ZsRO






[jira] [Updated] (HDDS-2613) Sonar : reduce Cognitive Complexity of deleteBlocksCommandHandler

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2613:
---
Labels: sonar  (was: )

> Sonar : reduce Cognitive Complexity of deleteBlocksCommandHandler
> -
>
> Key: HDDS-2613
> URL: https://issues.apache.org/jira/browse/HDDS-2613
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>  Labels: sonar
>
> Sonar reports CC of deleteKeyValueContainerBlocks is 25:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-7XKcVY8lQ4ZsRr&open=AW5md-7XKcVY8lQ4ZsRr






[jira] [Updated] (HDDS-2611) Sonar : resolve issues reported in ContainerSet

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2611:
---
Labels: sonar  (was: )

> Sonar : resolve issues reported in ContainerSet
> ---
>
> Key: HDDS-2611
> URL: https://issues.apache.org/jira/browse/HDDS-2611
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: sonar
>
> Sonar issues:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQf&open=AW5md-6RKcVY8lQ4ZsQf
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQg&open=AW5md-6RKcVY8lQ4ZsQg
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQh&open=AW5md-6RKcVY8lQ4ZsQh
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQi&open=AW5md-6RKcVY8lQ4ZsQi
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQj&open=AW5md-6RKcVY8lQ4ZsQj
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQk&open=AW5md-6RKcVY8lQ4ZsQk
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-6RKcVY8lQ4ZsQl&open=AW5md-6RKcVY8lQ4ZsQl






[jira] [Resolved] (HDDS-2604) scmcli pipeline deactivate command not working

2019-11-22 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2604.

Resolution: Duplicate

Hi [~nilotpalnandi], thanks for reporting this issue.  It is already fixed on 
the master branch as part of HDDS-2468.

> scmcli pipeline deactivate command not working
> --
>
> Key: HDDS-2604
> URL: https://issues.apache.org/jira/browse/HDDS-2604
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
>
> scmcli pipeline deactivate not working
>  
> output :
> {noformat}
> ozone scmcli pipeline deactivate 212e1f47-4890-49c2-a950-4d0b3a70cbfd
> Unknown command type: DeactivatePipeline
> root@st-ozone-kg2qce-l2ltm:/ansible# echo $?
> 255{noformat}






[jira] [Updated] (HDDS-2618) Sonar : reduce Cognitive Complexity of readVolume in ContainerReader

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2618:
---
Summary: Sonar : reduce Cognitive Complexity of readVolume in 
ContainerReader  (was: Sonar : reduce Cognitive Complexity of applyTransaction 
in ContainerStateMachinegnitive Complexity of readVolume in ContainerReader)

> Sonar : reduce Cognitive Complexity of readVolume in ContainerReader
> 
>
> Key: HDDS-2618
> URL: https://issues.apache.org/jira/browse/HDDS-2618
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Critical
>
> Sonar reports CC value of readVolume in ContainerReader is 31:
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-9fKcVY8lQ4ZsUU&open=AW5md-9fKcVY8lQ4ZsUU






[jira] [Updated] (HDDS-2609) Move plain unit tests out of integration tests

2019-11-21 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2609:
---
Status: Patch Available  (was: In Progress)

> Move plain unit tests out of integration tests
> --
>
> Key: HDDS-2609
> URL: https://issues.apache.org/jira/browse/HDDS-2609
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some test classes in {{integration-test}} are actually unit tests: they do 
> not start a mini cluster, nor even multiple components.  These can be moved 
> to the subprojects they belong to (e.g. {{container-service}}).  The benefit 
> is that it will be easier to spot if they are broken, since integration 
> tests are executed less frequently.






[jira] [Commented] (HDDS-2591) No tailMap needed for startIndex 0 in ContainerSet#listContainer

2019-11-21 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979928#comment-16979928
 ] 

Attila Doroszlai commented on HDDS-2591:


Thanks [~bharat], I submitted a PR to fix it.

> No tailMap needed for startIndex 0 in ContainerSet#listContainer
> 
>
> Key: HDDS-2591
> URL: https://issues.apache.org/jira/browse/HDDS-2591
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{ContainerSet#listContainer}} has this code:
> {code:title=https://github.com/apache/hadoop-ozone/blob/3c334f6a7b344e0e5f52fec95071c369286cfdcb/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java#L198}
> map = containerMap.tailMap(containerMap.firstKey(), true);
> {code}
> This is equivalent to:
> {code}
> map = containerMap;
> {code}
> since {{tailMap}} is a sub-map with all keys greater than or equal to 
> ({{inclusive=true}}) {{firstKey}}, which is the lowest key in the map.  So it 
> is a sub-map with all keys, i.e. the whole map.
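The equivalence can be demonstrated with a small stand-alone example (the map contents are made up; {{ContainerSet}} uses a {{ConcurrentSkipListMap}} keyed by container id):

```java
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

class TailMapDemo {
    public static boolean tailFromFirstKeyEqualsWholeMap() {
        NavigableMap<Long, String> containerMap = new ConcurrentSkipListMap<>();
        containerMap.put(1L, "one");
        containerMap.put(5L, "five");
        containerMap.put(9L, "nine");
        // tailMap(firstKey, true) keeps every key >= the smallest key,
        // i.e. every key: the view has exactly the same mappings as the map.
        return containerMap.tailMap(containerMap.firstKey(), true)
            .equals(containerMap);
    }
}
```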






[jira] [Created] (HDDS-2626) Avoid hostname lookup for invalid local IP addresses

2019-11-24 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2626:
--

 Summary: Avoid hostname lookup for invalid local IP addresses
 Key: HDDS-2626
 URL: https://issues.apache.org/jira/browse/HDDS-2626
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{OzoneSecurityUtil#getValidInetsForCurrentHost}} performs hostname lookup for 
all local network interfaces, even for invalid addresses.  This significantly 
slows down some secure tests ({{TestHddsSecureDatanodeInit}}, 
{{TestSecureOzoneCluster}}) when run on a machine with special IPv6 network 
interfaces, due to timeouts when reaching IPv6 DNS servers.

This issue proposes to disable the lookup for invalid addresses.
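The shape of such a filter can be sketched as below.  Both the class and the validity criteria are illustrative only; the real {{OzoneSecurityUtil}} code may use different checks.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class LocalAddresses {
    /** Illustrative validity check; the real code's criteria may differ. */
    public static boolean isValid(InetAddress addr) {
        return !addr.isLinkLocalAddress() && !addr.isAnyLocalAddress();
    }

    public static List<InetAddress> validLocalAddresses() {
        List<InetAddress> result = new ArrayList<>();
        try {
            for (NetworkInterface nic
                : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                    // Filter before any use of getCanonicalHostName(): the
                    // reverse DNS lookup is what makes invalid addresses slow.
                    if (isValid(addr)) {
                        result.add(addr);
                    }
                }
            }
        } catch (SocketException e) {
            // No interfaces available; return what was collected so far.
        }
        return result;
    }
}
```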






[jira] [Updated] (HDDS-2628) Make AuditMessage parameters strongly typed

2019-11-28 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2628:
---
Status: Patch Available  (was: In Progress)

> Make AuditMessage parameters strongly typed
> ---
>
> Key: HDDS-2628
> URL: https://issues.apache.org/jira/browse/HDDS-2628
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve type safety in {{AuditMessage$Builder}} for methods {{forOperation}} 
> and {{withResult}} by using existing {{interface AuditAction}} and {{enum 
> AuditEventStatus}} respectively instead of Strings.
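The idea can be sketched with simplified stand-in types.  Everything below is illustrative: the nested {{AuditAction}} interface, the {{AuditEventStatus}} enum and the message format are inventions mirroring the names in the description, not the real Ozone classes.

```java
class AuditMessageSketch {
    /** Illustrative stand-in for Ozone's real AuditAction interface. */
    public interface AuditAction {
        String getAction();
    }

    /** Illustrative stand-in for Ozone's real AuditEventStatus enum. */
    public enum AuditEventStatus { SUCCESS, FAILURE }

    private final String message;

    private AuditMessageSketch(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }

    public static class Builder {
        private AuditAction op;
        private AuditEventStatus result;

        // Accepting the interface/enum instead of String means a typo such
        // as "SUCESS" fails to compile rather than slipping into the log.
        public Builder forOperation(AuditAction op) {
            this.op = op;
            return this;
        }

        public Builder withResult(AuditEventStatus result) {
            this.result = result;
            return this;
        }

        public AuditMessageSketch build() {
            return new AuditMessageSketch(
                "op=" + op.getAction() + " ret=" + result.name());
        }
    }
}
```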






[jira] [Commented] (HDDS-2610) Fix the ObjectStore#listVolumes failure when argument is null

2019-11-22 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16980022#comment-16980022
 ] 

Attila Doroszlai commented on HDDS-2610:


Hi [~cxorm], I think the link in the description is not the right one, as the 
method's parameter is {{volumePrefix}}, not related to {{userName}}.  This one 
seems to better indicate the mismatch in behavior: 
[VolumeManager#listVolumes|https://github.com/apache/hadoop-ozone/blob/a731eeaa9ed0d1faecda3665b599145316300101/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManager.java#L84-L101]

> Fix the ObjectStore#listVolumes failure when argument is null
> -
>
> Key: HDDS-2610
> URL: https://issues.apache.org/jira/browse/HDDS-2610
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: YiSheng Lien
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As the description of the 
> [listVolumes|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java#L218],
>  we would list all volumes when setting the argument null.
> But now it throws OMException by underlying method.






[jira] [Updated] (HDDS-2721) Allow wrapping list of ByteBuffers with ChunkBuffer

2019-12-19 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2721:
---
Status: Patch Available  (was: In Progress)

> Allow wrapping list of ByteBuffers with ChunkBuffer
> ---
>
> Key: HDDS-2721
> URL: https://issues.apache.org/jira/browse/HDDS-2721
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{ChunkBuffer}} is a useful abstraction over {{ByteBuffer}} to hide whether 
> it's backed by a single {{ByteBuffer}} or multiple ones 
> ({{IncrementalChunkBuffer}}).  However, {{IncrementalChunkBuffer}} allocates 
> its own {{ByteBuffer}} instances and only works with uniform buffer sizes.  
> The goal of this task is to allow wrapping an existing {{List}} of 
> {{ByteBuffer}} instances in {{ChunkBuffer}}.






[jira] [Resolved] (HDDS-2762) Fix Sonar warnings in SCMBlockProtocolServer and BlockOutputStreamEntryPool

2019-12-19 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2762.

Resolution: Fixed

> Fix Sonar warnings in SCMBlockProtocolServer and BlockOutputStreamEntryPool
> ---
>
> Key: HDDS-2762
> URL: https://issues.apache.org/jira/browse/HDDS-2762
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HeKcVY8lQ4ZrXe=hadoop-ozone=false






[jira] [Commented] (HDDS-2086) ReconServer throws SQLException but path present for ozone.recon.db.dir in ozone-site

2019-12-18 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999223#comment-16999223
 ] 

Attila Doroszlai commented on HDDS-2086:


The NPE I mentioned is being fixed in HDDS-2776.

> ReconServer throws SQLException but path present for ozone.recon.db.dir in 
> ozone-site
> -
>
> Key: HDDS-2086
> URL: https://issues.apache.org/jira/browse/HDDS-2086
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
>
> java.sql.SQLException: path to 
> '/${ozone.recon.db.dir}/ozone_recon_sqlite.db': '/${ozone.recon.db.dir}' does 
> not exist
> But the property is present in ozone-site.xml:
> {code:xml}
> <property>
>   <name>ozone.recon.db.dir</name>
>   <value>/tmp/metadata</value>
> </property>
> {code}






[jira] [Created] (HDDS-2773) Configure Goldmark renderer

2019-12-17 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2773:
--

 Summary: Configure Goldmark renderer
 Key: HDDS-2773
 URL: https://issues.apache.org/jira/browse/HDDS-2773
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: documentation
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Starting with Hugo 0.60, the new Goldmark renderer is configured to skip HTML 
fragments.  This breaks the doc layout in a few places, e.g.:

 * _Easy start_, _Recommended_ etc. headers
 * tables
 * warning {{div}}
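If raw-HTML passthrough is the behavior we want to keep, Goldmark has a known renderer option (available since Hugo 0.60) that restores it; whether to use it or to convert the fragments to shortcodes is a separate design choice. A sketch for config.toml:

```toml
# Hypothetical config sketch: allow raw HTML fragments in Markdown
# content, as the old Blackfriday renderer did.
[markup]
  [markup.goldmark]
    [markup.goldmark.renderer]
      unsafe = true
```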






[jira] [Commented] (HDDS-2774) Hadoop 3.1 acceptance test fails with apk command not found

2019-12-18 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16998951#comment-16998951
 ] 

Attila Doroszlai commented on HDDS-2774:


Two possible approaches:
 # [use a fixed hash for the Alpine-based 3.1.2 
image|https://github.com/adoroszlai/hadoop-ozone/runs/354082082]
 # [change {{apk}} command to 
{{yum}}|https://github.com/adoroszlai/hadoop-ozone/runs/354085547]

> Hadoop 3.1 acceptance test fails with apk command not found
> ---
>
> Key: HDDS-2774
> URL: https://issues.apache.org/jira/browse/HDDS-2774
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>
> {{ozone-mr/hadoop31}} test is failing with:
> {code}
> sudo: apk: command not found
> {code}
> New hadoop:3.1.2 image is based on CentOS, not Alpine.






[jira] [Created] (HDDS-2774) Hadoop 3.1 acceptance test fails with apk command not found

2019-12-18 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2774:
--

 Summary: Hadoop 3.1 acceptance test fails with apk command not 
found
 Key: HDDS-2774
 URL: https://issues.apache.org/jira/browse/HDDS-2774
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ozone-mr/hadoop31}} test is failing with:

{code}
sudo: apk: command not found
{code}

New hadoop:3.1.2 image is based on CentOS, not Alpine.






[jira] [Commented] (HDDS-2774) Hadoop 3.1 acceptance test fails with apk command not found

2019-12-18 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16998960#comment-16998960
 ] 

Attila Doroszlai commented on HDDS-2774:


The first one failed with {{manifest for flokkr/hadoop:33266e62ac3f not found: 
manifest unknown: manifest unknown}}

> Hadoop 3.1 acceptance test fails with apk command not found
> ---
>
> Key: HDDS-2774
> URL: https://issues.apache.org/jira/browse/HDDS-2774
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>
> {{ozone-mr/hadoop31}} test is failing with:
> {code}
> sudo: apk: command not found
> {code}
> New hadoop:3.1.2 image is based on CentOS, not Alpine.






[jira] [Created] (HDDS-2784) Read to ByteBuffer uses wrong offset

2019-12-20 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2784:
--

 Summary: Read to ByteBuffer uses wrong offset
 Key: HDDS-2784
 URL: https://issues.apache.org/jira/browse/HDDS-2784
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{OzoneFSInputStream#read(ByteBuffer)}} uses the target buffer's position for 
offsetting into the temporary array:

{code:title=https://github.com/apache/hadoop-ozone/blob/b834fa48afef4ee4c73577c7af564e1e97cb9d5b/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java#L90-L97}
  public int read(ByteBuffer buf) throws IOException {

int bufInitPos = buf.position();
int readLen = Math.min(buf.remaining(), inputStream.available());

byte[] readData = new byte[readLen];
int bytesRead = inputStream.read(readData, bufInitPos, readLen);
buf.put(readData);
{code}

Given a buffer with capacity=10 and position=8, this results in the following:

 * {{readLen}} = 2 => {{readData.length}} = 2
 * {{bufInitPos}} = 8

So {{inputStream}} reads 2 bytes and writes them into {{readData}} starting at 
offset 8, which results in an {{IndexOutOfBoundsException}}.

The offset should always be 0, since the temporary array is sized exactly for 
the length to read, and it has no extra data at the start.
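A self-contained sketch of the corrected logic (hypothetical class name; the real fix belongs in {{OzoneFSInputStream}}), using the failing case from above (capacity=10, position=8):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class ReadToBufferFix {

  // Sketch of the corrected logic: the temporary array is sized exactly
  // for the read length, so the offset into it must be 0, not the
  // target buffer's position.
  public static int read(InputStream in, ByteBuffer buf) throws IOException {
    int readLen = Math.min(buf.remaining(), in.available());
    byte[] readData = new byte[readLen];
    int bytesRead = in.read(readData, 0, readLen); // offset 0, not buf.position()
    if (bytesRead > 0) {
      buf.put(readData, 0, bytesRead);
    }
    return bytesRead;
  }

  public static void main(String[] args) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(10);
    buf.position(8); // capacity=10, position=8: the case from the report
    InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3, 4});
    int n = read(in, buf); // no IndexOutOfBoundsException
    System.out.println(n);              // 2
    System.out.println(buf.position()); // 10
  }
}
```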






[jira] [Updated] (HDDS-2777) Add bytes read statistics to Ozone FileSystem implementation

2019-12-20 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2777:
---
Status: Patch Available  (was: Open)

> Add bytes read statistics to Ozone FileSystem implementation
> 
>
> Key: HDDS-2777
> URL: https://issues.apache.org/jira/browse/HDDS-2777
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>
> In Hive or in MR jobs, the FileSystem counters are reported based on the 
> statistics inside the FileSystem implementation. At the moment we do not 
> report any bytes-read statistics, while we do have the number of bytes 
> written and the read and write operation counts.
> This JIRA is to add the number of bytes read statistics and record it in the 
> FileSystem implementation.
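The mechanism can be illustrated with a plain-JDK sketch (hypothetical class, not the actual Ozone code): bytes are counted as they are read, the way a FileSystem implementation would feed its per-scheme statistics counter.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CountingInputStream extends FilterInputStream {

  // Running total of bytes read; a FileSystem implementation would
  // forward this to its statistics object instead.
  long bytesRead;

  public CountingInputStream(InputStream in) {
    super(in);
  }

  @Override
  public int read() throws IOException {
    int b = super.read();
    if (b != -1) {
      bytesRead++;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    int n = super.read(buf, off, len);
    if (n > 0) {
      bytesRead += n;
    }
    return n;
  }

  public static void main(String[] args) throws IOException {
    CountingInputStream in =
        new CountingInputStream(new ByteArrayInputStream(new byte[]{1, 2, 3}));
    byte[] tmp = new byte[2];
    in.read(tmp);  // reads 2 bytes
    in.read();     // reads 1 more byte
    System.out.println(in.bytesRead); // 3
  }
}
```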






[jira] [Created] (HDDS-2785) Move ozonefs and tools minicluster tests to integration-test

2019-12-20 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2785:
--

 Summary: Move ozonefs and tools minicluster tests to 
integration-test
 Key: HDDS-2785
 URL: https://issues.apache.org/jira/browse/HDDS-2785
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ozonefs}} and {{tools}} modules in {{hadoop-ozone}} have a mix of unit and 
integration tests.  This issue proposes to

# switch dependency order: let {{integration-test}} depend on these modules 
instead of the other way around
# move integration tests (those that use {{Mini*Cluster}}) from these modules 
to {{integration-test}}
# let {{unit}} check run remaining tests in these modules

This improves code coverage in CI.






[jira] [Updated] (HDDS-2277) Consider allowing maintenance end time to be specified in human readable format

2019-12-20 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2277:
---
Labels:   (was: pull-request-available)

> Consider allowing maintenance end time to be specified in human readable 
> format
> ---
>
> Key: HDDS-2277
> URL: https://issues.apache.org/jira/browse/HDDS-2277
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Stephen O'Donnell
>Assignee: Nanda kumar
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The initial command for maintenance mode allows a user to specify the number 
> of hours after which maintenance will end.
> It may be a better user experience to allow them to specify the time like:
> 1.5 days
> 1 day
> 10 hours
> etc
> We should consider whether it makes sense to add this feature.
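A minimal sketch of what such parsing could look like (hypothetical, plain Java; a real CLI would likely lean on an existing duration utility):

```java
public class DurationParser {

  // Hypothetical sketch: parse values like "1.5 days", "1 day" or
  // "10 hours" into a number of hours.
  public static double parseHours(String text) {
    String[] parts = text.trim().toLowerCase().split("\\s+");
    if (parts.length != 2) {
      throw new IllegalArgumentException("expected '<number> <unit>': " + text);
    }
    double value = Double.parseDouble(parts[0]);
    if (parts[1].startsWith("day")) {
      return value * 24;
    }
    if (parts[1].startsWith("hour")) {
      return value;
    }
    throw new IllegalArgumentException("unknown unit: " + parts[1]);
  }

  public static void main(String[] args) {
    System.out.println(parseHours("1.5 days")); // 36.0
    System.out.println(parseHours("10 hours")); // 10.0
  }
}
```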





