[jira] [Updated] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1511:

Attachment: HDDS-1511.001.patch

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1511.000.patch, HDDS-1511.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume that counts the remaining space in open 
> containers until they reach max capacity. The counter is incremented (by the 
> container max capacity) for every container create, and decremented (by the 
> chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be rejected.
> The scope of this jira is just to add space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.
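
For illustration, a minimal sketch of the committedBytes lifecycle (the class
and method names below are hypothetical; the actual patch wires this into
HddsVolume and the container create/write paths):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of per-volume committed-space tracking.
public class VolumeSpaceTracker {
  // Space promised to open containers but not yet consumed by writes.
  private final AtomicLong committedBytes = new AtomicLong(0);

  // On container create: reserve the container's full max capacity.
  public void onContainerCreate(long containerMaxSize) {
    committedBytes.addAndGet(containerMaxSize);
  }

  // On each chunk write: written bytes move from "committed" to actual usage.
  public void onChunkWrite(long chunkBytes) {
    committedBytes.addAndGet(-chunkBytes);
  }

  // A later jira can compare this against the volume's free space to
  // reject container creates that would over-subscribe the volume.
  public long getCommittedBytes() {
    return committedBytes.get();
  }
}
{code}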



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Supratim Deka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836892#comment-16836892
 ] 

Supratim Deka commented on HDDS-1511:
-

Addressed the comment from [~arpitagarwal] in patch 001. Will add a pull request as 
well.

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1511.000.patch, HDDS-1511.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume that counts the remaining space in open 
> containers until they reach max capacity. The counter is incremented (by the 
> container max capacity) for every container create, and decremented (by the 
> chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be rejected.
> The scope of this jira is just to add space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-700) Support rack awared node placement policy based on network topology

2019-05-09 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836863#comment-16836863
 ] 

Xiaoyu Yao commented on HDDS-700:
-

Thanks [~Sammi] for the patch. LGTM overall. Here are a few comments:
DatanodeDetails.java
 
Line 266: can we update the toString to include the network location 
information?
 
Line 353: if the caller does not set the location in the case where no topology 
is defined, should we have a default location like /default-location or 
/default-rack?
 
SCMContainerPlacementRackAware.java
Line 77: should we consider redefining the interface to cover the case where we 
want to include the affinity node in the first node selection as well? For 
example, a client inside the cluster may prefer to write the first copy to a 
local container if possible.
 
Line 84: should we consider some fallback strategy for certain cases, where we 
allow the allocation to succeed but recover the placement later if possible?
 
Line 88: NIT: can we rename it to chosenNodes?
 
Line 101-122: some of the logic can be consolidated.
 
Line 160: do we need to add some logic to avoid an infinite loop in node 
selection?
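
On the infinite-loop question above, a bounded-retry pattern is one option. A
minimal self-contained sketch (the class, constant, and use of String node ids
are illustrative assumptions, not the patch's API):
{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: bound random node selection so a full or
// heavily-excluded cluster cannot make the chooser loop forever.
public class BoundedNodeChooser {
  private static final int MAX_RETRY = 3;
  private final Random random = new Random();

  // Assumes nodes is non-empty; throws instead of spinning once the
  // retry budget is exhausted.
  public String choose(List<String> nodes, List<String> excluded)
      throws IOException {
    for (int attempt = 0; attempt < MAX_RETRY; attempt++) {
      String candidate = nodes.get(random.nextInt(nodes.size()));
      if (!excluded.contains(candidate)) {
        return candidate;
      }
    }
    throw new IOException(
        "No qualified node found after " + MAX_RETRY + " attempts");
  }
}
{code}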

> Support rack awared node placement policy based on network topology
> ---
>
> Key: HDDS-700
> URL: https://issues.apache.org/jira/browse/HDDS-700
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
> Attachments: HDDS-700.01.patch
>
>
> Implement a new container placement policy based on the datanode's network 
> topology. It follows the same rule as HDFS: by default, with 3 replicas, two 
> replicas will be on the same rack, and the third replica and all the remaining 
> replicas will be on different racks.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1511?focusedWorklogId=240016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240016
 ]

ASF GitHub Bot logged work on HDDS-1511:


Author: ASF GitHub Bot
Created on: 10/May/19 03:01
Start Date: 10/May/19 03:01
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #811: HDDS-1511. Space 
tracking for Open Containers in HDDS Volumes. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/811
 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 240016)
Time Spent: 10m
Remaining Estimate: 0h

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1511.000.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume that counts the remaining space in open 
> containers until they reach max capacity. The counter is incremented (by the 
> container max capacity) for every container create, and decremented (by the 
> chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be rejected.
> The scope of this jira is just to add space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1511:
-
Labels: pull-request-available  (was: )

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume that counts the remaining space in open 
> containers until they reach max capacity. The counter is incremented (by the 
> container max capacity) for every container create, and decremented (by the 
> chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be rejected.
> The scope of this jira is just to add space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836829#comment-16836829
 ] 

Arpit Agarwal commented on HDDS-1511:
-

Thanks for the patch [~sdeka].
{code}
// reduce space committed for this container equal to bytes written
if (committedSpace) {
  this.getVolume().incCommittedBytes(0 - bytes);
}
{code}

If the container grows beyond its max size, which can happen (for example, when 
a final chunk write crosses the size limit), then we can potentially subtract 
more than was originally committed. So we should probably check for this case, 
perhaps with something like:
{code}
if (committedSpace && getWriteBytes() < getMaxSize()) {
  this.getVolume().incCommittedBytes(0 - bytes);
}
{code}

We may have to think about the synchronization. We also need the same fix in 
{{releaseCommitSpace}}.

Also, committedBytes is not currently subtracted from the available space. I 
assume you will do that in a separate patch.
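
One hedged way to cover both the over-release and the synchronization concern
is an atomically clamped decrement; a sketch only (CommittedSpace and its
methods are hypothetical, not the patch's code):
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a clamped, thread-safe release: never give back
// more committed space than is still outstanding for this container.
public class CommittedSpace {
  private final AtomicLong remainingCommitted;

  public CommittedSpace(long containerMaxSize) {
    this.remainingCommitted = new AtomicLong(containerMaxSize);
  }

  // Returns the amount actually released, clamped to what is left.
  public long release(long bytes) {
    while (true) {
      long current = remainingCommitted.get();
      long toRelease = Math.min(bytes, current);
      if (remainingCommitted.compareAndSet(current, current - toRelease)) {
        return toRelease;
      }
      // CAS failed due to a concurrent update; retry with the fresh value.
    }
  }
}
{code}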

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume that counts the remaining space in open 
> containers until they reach max capacity. The counter is incremented (by the 
> container max capacity) for every container create, and decremented (by the 
> chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be rejected.
> The scope of this jira is just to add space tracking for Open Containers. 
> Checking for space and failing container create will be introduced in a 
> subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=240005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240005
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 10/May/19 01:59
Start Date: 10/May/19 01:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-491126699
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 391 | trunk passed |
   | +1 | compile | 197 | trunk passed |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 786 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 425 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 439 | the patch passed |
   | +1 | compile | 192 | the patch passed |
   | +1 | javac | 192 | the patch passed |
   | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 631 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | the patch passed |
   | +1 | findbugs | 430 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 131 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1308 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6389 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b4b5ff3f75e2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b832e17 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/testReport/ |
   | Max. process+thread count | 5060 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 240005)
Time Spent: 6h 10m  (was: 6h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available

[jira] [Work logged] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1491?focusedWorklogId=240002&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240002
 ]

ASF GitHub Bot logged work on HDDS-1491:


Author: ASF GitHub Bot
Created on: 10/May/19 01:45
Start Date: 10/May/19 01:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #795: HDDS-1491. Ozone 
KeyInputStream seek() should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-491124559
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 406 | trunk passed |
   | +1 | compile | 201 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 240 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 422 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 404 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | the patch passed |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 160 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1387 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5799 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/795 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7c1109337df3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1b48100 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/testReport/ |
   | Max. process+thread count | 4749 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/client 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-795/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836806#comment-16836806
 ] 

Hudson commented on HDDS-1474:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16536 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16536/])
HDDS-1474. "ozone.scm.datanode.id" config should take path for a dir and 
(koneru.hanisha: rev b832e174b0ef617ca05b34dfdefd183584b622f0)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) hadoop-hdds/docs/content/Settings.md
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonefs/docker-config
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone/config-configmap.yaml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonetrace/docker-config
* (edit) hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml


> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a dir path as its value and assume the standard 
> filename "datanode.id".
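
A minimal sketch of the resulting lookup (the helper class is hypothetical;
only the config key and filename come from the issue):
{code:java}
import java.io.File;

// Hypothetical sketch: resolve the datanode id file from a directory-valued
// "ozone.scm.datanode.id" setting plus a fixed, well-known filename.
public class DatanodeIdPath {
  private static final String OZONE_SCM_DATANODE_ID =
      "ozone.scm.datanode.id";          // config key from the issue
  private static final String DATANODE_ID_FILE = "datanode.id";

  public static File resolve(String configuredDir) {
    // Callers previously pointed the key at a file; it now names a
    // directory, and the filename is standardized.
    return new File(configuredDir, DATANODE_ID_FILE);
  }
}
{code}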



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1491?focusedWorklogId=239990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239990
 ]

ASF GitHub Bot logged work on HDDS-1491:


Author: ASF GitHub Bot
Created on: 10/May/19 00:13
Start Date: 10/May/19 00:13
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #795: HDDS-1491. Ozone 
KeyInputStream seek() should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-491110158
 
 
   Thank you @bshashikant for the review. I have addressed your review 
comments. 
   Also, there was a mistake in bufferPosition. It should be with respect to 
the buffers corresponding to a chunk. Fixed that too.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239990)
Time Spent: 1h  (was: 50m)

> Ozone KeyInputStream seek() should not read the chunk file
> --
>
> Key: HDDS-1491
> URL: https://issues.apache.org/jira/browse/HDDS-1491
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> KeyInputStream#seek() calls BlockInputStream#seek() to adjust the buffer 
> position to the seeked position. As part of the seek operation, the whole 
> chunk is read from the container and stored in the buffer so that the buffer 
> position can be advanced to the seeked position. 
> We should not read from disk on a seek() operation. Instead, when the chunk 
> file is read and put in the buffer during a subsequent read operation, we can 
> advance the buffer position to the previously seeked position at that point.
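
A hedged sketch of the lazy-seek idea (field and method names are hypothetical,
not the actual BlockInputStream code): seek() only records the target, and the
chunk read happens on the next read():
{code:java}
// Hypothetical sketch of lazy seek: seek() records the position only;
// the chunk is fetched from disk on the next read(), not during seek().
public class LazySeekStream {
  private long seekPosition = -1;   // pending seek target, -1 if none
  private long bufferPosition = 0;  // current position within the buffer

  public void seek(long pos) {
    // No disk I/O here; just remember where the caller wants to be.
    seekPosition = pos;
  }

  public int read(byte[] b) {
    if (seekPosition >= 0) {
      // The chunk covering seekPosition is fetched here, on the read path,
      // and the buffer position is advanced to the pending target.
      bufferPosition = seekPosition;
      seekPosition = -1;
    }
    // ... fill b from the buffer starting at bufferPosition ...
    return 0; // placeholder return for this sketch
  }
}
{code}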



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=239989&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239989
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 10/May/19 00:12
Start Date: 10/May/19 00:12
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r282709424
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -41,16 +41,35 @@
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.concurrent.RejectedExecutionException;
 
 /**
  * Utility class for Ratis pipelines. Contains methods to create and destroy
  * ratis pipelines.
  */
-final class RatisPipelineUtils {
+public final class RatisPipelineUtils {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(RatisPipelineUtils.class);
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private static final int PARALLELISIM_FOR_POOL = 3;
+
+  private static final ForkJoinPool.ForkJoinWorkerThreadFactory FACTORY =
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239989)
Time Spent: 6h  (was: 5h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.
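
As a hedged illustration of the approach (not the patch itself), work submitted
to a dedicated ForkJoinPool keeps parallel-stream execution off the JVM-wide
commonPool; the class below is a standalone example:
{code:java}
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;

// Hypothetical sketch: run a parallel stream inside a dedicated ForkJoinPool.
// Parallelism of 3 mirrors the 1- and 3-node pipelines the patch mentions.
public class DedicatedPoolExample {
  private static final ForkJoinPool POOL = new ForkJoinPool(3);

  public static void process(List<String> nodes)
      throws InterruptedException, ExecutionException {
    // Stream tasks forked from within a ForkJoinPool task execute in that
    // pool rather than in ForkJoinPool.commonPool().
    POOL.submit(() ->
        nodes.parallelStream().forEach(n -> System.out.println(n))
    ).get();
  }
}
{code}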



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239987&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239987
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 10/May/19 00:11
Start Date: 10/May/19 00:11
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #809: 
HDDS-1474. "ozone.scm.datanode.id" config should take path for a dir and not a 
file
URL: https://github.com/apache/hadoop/pull/809
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239987)
Time Spent: 4h 40m  (was: 4.5h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a dir path as its value and assume the standard 
> filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=239988&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239988
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 10/May/19 00:11
Start Date: 10/May/19 00:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #714: HDDS-1406. 
Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-491109791
 
 
   Thank You @lokeshj1703 for the review.
   I have addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239988)
Time Spent: 5h 50m  (was: 5h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239986
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 10/May/19 00:10
Start Date: 10/May/19 00:10
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #809: HDDS-1474. 
"ozone.scm.datanode.id" config should take path for a dir and not a file
URL: https://github.com/apache/hadoop/pull/809#issuecomment-491109589
 
 
   LGTM. +1. Will merge the PR. Thanks @vivekratnavel 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239986)
Time Spent: 4.5h  (was: 4h 20m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a dir path as its value and assume the standard 
> filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836788#comment-16836788
 ] 

Hadoop QA commented on HDDS-1495:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m 
45s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} hadolint {color} | {color:red}  0m  
2s{color} | {color:red} The patch generated 12 new + 0 unchanged - 0 fixed = 12 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
3s{color} | {color:orange} The patch generated 156 new + 0 unchanged - 0 fixed 
= 156 total (was 0) {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 5 new + 0 unchanged - 0 fixed = 5 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 30s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 12s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2682/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1495 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836776#comment-16836776
 ] 

Hadoop QA commented on HDDS-1511:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 37m 
36s{color} | {color:red} root in trunk has 6 extant findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} branch/hadoop-ozone/integration-test no findbugs 
output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 38s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} hadoop-ozone/integration-test has no data from 
findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}149m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
5s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}406m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.impl.TestContainerSet |
|   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-05-09 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836754#comment-16836754
 ] 

Konstantin Shvachko commented on HDFS-14245:


+1 for the v007 patch

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116, we did a simple fix.
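
For readers unfamiliar with the failure mode, a toy standalone illustration
(the stand-in classes below are hypothetical and unrelated to the actual fix):
casting a base-class instance to a subclass it does not belong to throws
exactly this exception, so code that needs a subtype should check first:
{code:java}
// Toy illustration of the failure mode only; not the HDFS fix.
public class CastDemo {
  static class HAProxyFactory {}  // stand-in base factory type
  static class ClientHAProxyFactory extends HAProxyFactory {}

  public static void main(String[] args) {
    HAProxyFactory factory = new HAProxyFactory();
    // Guard instead of blindly casting; an unconditional cast here would
    // throw java.lang.ClassCastException, like the stack trace above.
    if (factory instanceof ClientHAProxyFactory) {
      ClientHAProxyFactory client = (ClientHAProxyFactory) factory;
      System.out.println("using client factory: " + client);
    } else {
      System.out.println("factory is not a ClientHAProxyFactory");
    }
  }
}
{code}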



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-09 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836747#comment-16836747
 ] 

Eric Yang commented on HDDS-1495:
-

Patch 007 imports the docker-hadoop-runner branch code into the docker image 
build.  This will generate an apache/ozone image that matches the 
apache/hadoop-runner:latest image, for development purposes.  The release image 
will not have the hardcoded uid/gid, dumb-init, or byteman dependency.

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker Hub 
> under the Apache organization. Browsing the Apache GitHub mirror, there are only 7 
> projects using a separate repository for the docker image build. Popular projects' 
> official images are not from the Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like an inline build process is widely employed by the majority of projects, such 
> as Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, the Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from the source tree naming, if 
> Ozone is intended to be a subproject of Hadoop for a long period of time. This 
> enables the Hadoop community to host docker images for various subprojects without 
> having to check out several source trees to trigger a grand build. However, an 
> inline build process seems more popular than a separated process. Hence, I 
> highly recommend making the docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> Item a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permission can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1495) Create hadoop/ozone docker images with inline build process

2019-05-09 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1495:

Attachment: HDDS-1495.007.patch

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HDDS-1495
> URL: https://issues.apache.org/jira/browse/HDDS-1495
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16091.001.patch, HADOOP-16091.002.patch, 
> HDDS-1495.003.patch, HDDS-1495.004.patch, HDDS-1495.005.patch, 
> HDDS-1495.006.patch, HDDS-1495.007.patch, Hadoop Docker Image inline build 
> process.pdf
>
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker Hub 
> under the Apache organization. Browsing the Apache GitHub mirror, there are only 7 
> projects using a separate repository for the docker image build. Popular projects' 
> official images are not from the Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like an inline build process is widely employed by the majority of projects, such 
> as Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, the Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from the source tree naming, if 
> Ozone is intended to be a subproject of Hadoop for a long period of time. This 
> enables the Hadoop community to host docker images for various subprojects without 
> having to check out several source trees to trigger a grand build. However, an 
> inline build process seems more popular than a separated process. Hence, I 
> highly recommend making the docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> Item a) can be solved (as [~eyang] suggested) by using a personal docker 
> image during the vote and publishing it to dockerhub after the vote (in case 
> the permission can be set by INFRA).
> Note: based on LEGAL-270 and the linked discussion, both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contain more information about these 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239933&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239933
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 20:57
Start Date: 09/May/19 20:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #809: HDDS-1474. 
"ozone.scm.datanode.id" config should take path for a dir and not a file
URL: https://github.com/apache/hadoop/pull/809#issuecomment-491063721
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 411 | trunk passed |
   | +1 | compile | 206 | trunk passed |
   | +1 | checkstyle | 58 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | trunk passed |
   | 0 | spotbugs | 244 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 428 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 408 | the patch passed |
   | +1 | compile | 215 | the patch passed |
   | +1 | javac | 215 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 734 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | the patch passed |
   | +1 | findbugs | 449 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 156 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1272 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 6568 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-809/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/809 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs yamllint 
|
   | uname | Linux d192ceb1432e 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2d31ccc |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-809/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-809/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-809/1/testReport/ |
   | Max. process+thread count | 4641 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-809/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239933)
Time Spent: 4h 20m  (was: 4h 10m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=239929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239929
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 09/May/19 20:46
Start Date: 09/May/19 20:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#issuecomment-491059971
 
 
   cc @anuengineer (I am not able to add you as a reviewer, so tagging you 
here to notify you about this)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239929)
Time Spent: 20m  (was: 10m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> using rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the 
> other is the readyBuffer. We add an entry to the current buffer and check 
> whether another flush call is outstanding. If not, we flush to disk; 
> otherwise we add entries to the other buffer while the sync is happening.
>  
> So, while a sync is in progress, we add new requests to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put*.
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs will not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further jiras.
>  
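A minimal sketch of this flush pattern, assuming RocksJava's WriteBatch and 
WriteOptions API; the class, field and thread names below are illustrative, 
not the actual HDDS-1512 code:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

// Illustrative sketch of the double-buffer flush pattern; not the actual
// OzoneManager code.
public class TransactionBufferSketch {

  // One OM transaction to persist, reduced here to a key/value pair.
  static final class Entry {
    final byte[] key;
    final byte[] value;
    Entry(byte[] key, byte[] value) { this.key = key; this.value = value; }
  }

  private final RocksDB db;
  private List<Entry> currentBuffer = new ArrayList<>();
  private final Thread flusher = new Thread(this::flushLoop, "buffer-flush");

  public TransactionBufferSketch(RocksDB db) {
    this.db = db;
    flusher.setDaemon(true);
    flusher.start();
  }

  // Called for every transaction: appends in memory only, no per-operation
  // rocksdb.put().
  public synchronized void add(byte[] key, byte[] value) {
    currentBuffer.add(new Entry(key, value));
    notify(); // wake the flusher if it is waiting
  }

  private void flushLoop() {
    while (true) {
      List<Entry> readyBuffer;
      synchronized (this) {
        while (currentBuffer.isEmpty()) {
          try {
            wait();
          } catch (InterruptedException e) {
            return;
          }
        }
        // Swap: everything accumulated so far becomes the ready buffer,
        // while new transactions keep accumulating in a fresh current
        // buffer during the outstanding flush.
        readyBuffer = currentBuffer;
        currentBuffer = new ArrayList<>();
      }
      try (WriteBatch batch = new WriteBatch();
           WriteOptions opts = new WriteOptions()) {
        for (Entry e : readyBuffer) {
          batch.put(e.key, e.value);
        }
        db.write(opts, batch); // one batched commit instead of N puts
      } catch (RocksDBException e) {
        // Per the note above: a failed flush is catastrophic; terminate
        // rather than let the OM DBs diverge.
        throw new IllegalStateException("Buffer flush failed", e);
      }
    }
  }
}
{code}

The key property is that add() only appends in memory and at most one batched 
db.write() is in flight; whatever arrives during a flush is picked up by the 
next buffer swap.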



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=239926&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239926
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 09/May/19 20:44
Start Date: 09/May/19 20:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810
 
 
   1. This Jira adds the double buffer implementation. (It is not integrated 
into OM; that will be done in further jiras.)
   2. Added a few response classes to give an idea of how this will be 
integrated with OM.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239926)
Time Spent: 10m
Remaining Estimate: 0h

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> using rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the 
> other is the readyBuffer. We add an entry to the current buffer and check 
> whether another flush call is outstanding. If not, we flush to disk; 
> otherwise we add entries to the other buffer while the sync is happening.
>  
> So, while a sync is in progress, we add new requests to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put*.
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs will not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1512:
-
Labels: pull-request-available  (was: )

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> using rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the 
> other is the readyBuffer. We add an entry to the current buffer and check 
> whether another flush call is outstanding. If not, we flush to disk; 
> otherwise we add entries to the other buffer while the sync is happening.
>  
> So, while a sync is in progress, we add new requests to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put*.
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs will not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1512:
-
Issue Type: Sub-task  (was: New Feature)
Parent: HDDS-505

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> using rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the 
> other is the readyBuffer. We add an entry to the current buffer and check 
> whether another flush call is outstanding. If not, we flush to disk; 
> otherwise we add entries to the other buffer while the sync is happening.
>  
> So, while a sync is in progress, we add new requests to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put*.
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs will not diverge. A flush failure should be 
> considered a catastrophic failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1512:
-
Description: 
This Jira is created to implement DoubleBuffer in OzoneManager to flush 
transactions to OM DB.

 
h2. Flushing Transactions to RocksDB:

We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
shall flush RocksDB transactions in batches, instead of the current way of 
using rocksdb.put() after every operation. At a given time only one batch will 
be outstanding for flush while newer transactions are accumulated in memory to 
be flushed later.

The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the other 
is the readyBuffer. We add an entry to the current buffer and check whether 
another flush call is outstanding. If not, we flush to disk; otherwise we add 
entries to the other buffer while the sync is happening.

So, while a sync is in progress, we add new requests to the other buffer, and 
when we can sync we use a *RocksDB batch commit to sync to disk, instead of 
rocksdb put*.

Note: If the flush to disk fails on any OM, we shall terminate the 
OzoneManager so that the OM DBs will not diverge. A flush failure should be 
considered a catastrophic failure.

The scope of this Jira is to add the DoubleBuffer implementation; integrating 
it into the current OM will be done in further jiras.

 

  was:
This Jira is created to implement DoubleBuffer in OzoneManager to flush 
transactions to OM DB.

 
h2. Flushing Transactions to RocksDB:

We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
shall flush RocksDB transactions in batches, instead of the current way of 
using rocksdb.put() after every operation. At a given time only one batch will 
be outstanding for flush while newer transactions are accumulated in memory to 
be flushed later.

The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the other 
is the readyBuffer. We add an entry to the current buffer and check whether 
another flush call is outstanding. If not, we flush to disk; otherwise we add 
entries to the other buffer while the sync is happening.

So, while a sync is in progress, we add new requests to the other buffer, and 
when we can sync we use a *RocksDB batch commit to sync to disk, instead of 
rocksdb put*.

Note: If the flush to disk fails on any OM, we shall terminate the 
OzoneManager so that the OM DBs will not diverge. A flush failure should be 
considered a catastrophic failure.


> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> using rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the 
> other is the readyBuffer. We add an entry to the current buffer and check 
> whether another flush call is outstanding. If not, we flush to disk; 
> otherwise we add entries to the other buffer while the sync is happening.
>  
> So, while a sync is in progress, we add new requests to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put*.
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs will not diverge. A flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-09 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1512:


 Summary: Implement DoubleBuffer in OzoneManager
 Key: HDDS-1512
 URL: https://issues.apache.org/jira/browse/HDDS-1512
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is created to implement DoubleBuffer in OzoneManager to flush 
transactions to OM DB.

 
h2. Flushing Transactions to RocksDB:

We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
shall flush RocksDB transactions in batches, instead of the current way of 
using rocksdb.put() after every operation. At a given time only one batch will 
be outstanding for flush while newer transactions are accumulated in memory to 
be flushed later.

The DoubleBuffer will have 2 buffers: one is the currentBuffer, and the other 
is the readyBuffer. We add an entry to the current buffer and check whether 
another flush call is outstanding. If not, we flush to disk; otherwise we add 
entries to the other buffer while the sync is happening.

So, while a sync is in progress, we add new requests to the other buffer, and 
when we can sync we use a *RocksDB batch commit to sync to disk, instead of 
rocksdb put*.

Note: If the flush to disk fails on any OM, we shall terminate the 
OzoneManager so that the OM DBs will not diverge. A flush failure should be 
considered a catastrophic failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239920
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 20:04
Start Date: 09/May/19 20:04
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #809: HDDS-1474. 
"ozone.scm.datanode.id" config should take path for a dir and not a file
URL: https://github.com/apache/hadoop/pull/809#issuecomment-491046014
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239920)
Time Spent: 4h 10m  (was: 4h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"
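For illustration, a sketch of how the id file could be resolved under the new 
semantics; the key name follows the patch title (ozone.scm.datanode.id.dir), 
but the constant, class and helper names here are hypothetical:

{code:java}
import java.io.File;

import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch, not the actual HddsServerUtil code.
class DatanodeIdPathSketch {
  static final String OZONE_SCM_DATANODE_ID_DIR = "ozone.scm.datanode.id.dir";
  static final String DATANODE_ID_FILE_NAME = "datanode.id";

  static File getDatanodeIdFile(Configuration conf) {
    // The config now names a directory, e.g. /data ...
    String idDir = conf.get(OZONE_SCM_DATANODE_ID_DIR);
    // ... and the filename is fixed, yielding /data/datanode.id.
    return new File(idDir, DATANODE_ID_FILE_NAME);
  }
}
{code}

This is also why the k8s/docker-config values reviewed later in this thread 
are expected to point at /data rather than /data/datanode.id.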



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836659#comment-16836659
 ] 

Arpit Agarwal commented on HDDS-1511:
-

bq. Perhaps a better investment is to write code that handles the container 
creation or chunk write failure, or come up with an algorithm to look at what 
HDFS is doing too.
[~anu], you are absolutely right. We should and we will make sure we robustly 
handle IO failures.

This particular Jira fixes a very limited scenario of putting new containers on 
obviously full volumes. It is not a substitute for robust failure handling.

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume - this counts the remaining space in 
> Open containers until they reach max capacity. The counter is incremented (by 
> the container max capacity) for every container create, and decremented (by 
> the chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create: if there is not sufficient free space in the volume, the 
> container create operation can be failed.
> The scope of this jira is just to add the space tracking for Open Containers; 
> checking for space and failing container create will be introduced in a 
> subsequent jira.
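A minimal sketch of the counter semantics described above, assuming an 
AtomicLong on the volume object; the class and method names are illustrative, 
not the actual HddsVolume API:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of committedBytes accounting on a volume.
class VolumeSpaceTrackerSketch {
  // Space still reserved for open containers on this volume.
  private final AtomicLong committedBytes = new AtomicLong(0);

  // Container create: reserve the container's full max capacity up front.
  void onContainerCreate(long containerMaxSizeBytes) {
    committedBytes.addAndGet(containerMaxSizeBytes);
  }

  // Chunk write into an open container: the written bytes become real
  // usage rather than a reservation, so release them from the counter.
  void onChunkWrite(long chunkSizeBytes) {
    committedBytes.addAndGet(-chunkSizeBytes);
  }

  long getCommittedBytes() {
    return committedBytes.get();
  }
}
{code}

The subsequent jira could then reject a container create when the volume's 
free space minus committedBytes cannot hold another container.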



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239903&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239903
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 19:06
Start Date: 09/May/19 19:06
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #809: 
HDDS-1474. "ozone.scm.datanode.id" config should take path for a dir and not a 
file
URL: https://github.com/apache/hadoop/pull/809
 
 
   Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
value. It should instead take dir path as its value and assume a standard 
filename "datanode.id"
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239903)
Time Spent: 4h  (was: 3h 50m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=239892&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239892
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 09/May/19 18:52
Start Date: 09/May/19 18:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-491023563
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 415 | trunk passed |
   | +1 | compile | 205 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 802 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 130 | trunk passed |
   | 0 | spotbugs | 237 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 417 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 396 | the patch passed |
   | +1 | compile | 204 | the patch passed |
   | +1 | javac | 204 | the patch passed |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 113 | the patch passed |
   | +1 | findbugs | 428 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 132 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1136 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5514 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3f5f3b7f3979 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90add05 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/5/testReport/ |
   | Max. process+thread count | 4521 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239892)
Time Spent: 4h 10m  (was: 4h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning to implement double buffer 

[jira] [Commented] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836624#comment-16836624
 ] 

Hudson commented on HDDS-1474:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16533 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16533/])
Revert "HDDS-1474. ozone.scm.datanode.id config should take path for a (github: 
rev 2d31ccc1b3dc88870278c48c19ea64ca4f560ee3)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonetrace/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonefs/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config
* (edit) hadoop-hdds/docs/content/Settings.md
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config


> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HDDS-1474:
--

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-05-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836603#comment-16836603
 ] 

Hadoop QA commented on HDFS-14245:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968309/HDFS-14245.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f600beb6fb3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 90add05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239866
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:18
Start Date: 09/May/19 18:18
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #808: Revert 
"HDDS-1474. ozone.scm.datanode.id config should take path for a dir "
URL: https://github.com/apache/hadoop/pull/808
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239866)
Time Spent: 3h 50m  (was: 3h 40m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239863&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239863
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:17
Start Date: 09/May/19 18:17
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #808: Revert 
"HDDS-1474. ozone.scm.datanode.id config should take path for a dir "
URL: https://github.com/apache/hadoop/pull/808
 
 
   Reverts apache/hadoop#792
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239863)
Time Spent: 3h 40m  (was: 3.5h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239861&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239861
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:17
Start Date: 09/May/19 18:17
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-491011189
 
 
   Thanks @anuengineer for catching this. I will revert the patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239861)
Time Spent: 3.5h  (was: 3h 20m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239851&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239851
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:05
Start Date: 09/May/19 18:05
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r282601401
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/config-configmap.yaml
 ##
 @@ -20,7 +20,7 @@ metadata:
   name: config
 data:
   OZONE-SITE.XML_hdds.datanode.dir: /data/storage
-  OZONE-SITE.XML_ozone.scm.datanode.id: /data/datanode.id
+  OZONE-SITE.XML_ozone.scm.datanode.id.dir: /data/datanode.id
 
 Review comment:
   Same, /data instead of /data/datanode.id?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239851)
Time Spent: 3h 10m  (was: 3h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239852&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239852
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:05
Start Date: 09/May/19 18:05
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r282601506
 
 

 ##
 File path: hadoop-ozone/dist/src/main/k8s/examples/ozone/config-configmap.yaml
 ##
 @@ -20,7 +20,7 @@ metadata:
   name: config
 data:
   OZONE-SITE.XML_hdds.datanode.dir: /data/storage
-  OZONE-SITE.XML_ozone.scm.datanode.id: /data/datanode.id
+  OZONE-SITE.XML_ozone.scm.datanode.id.dir: /data/datanode.id
   OZONE-SITE.XML_ozone.metadata.dirs: /data/metadata
 
 Review comment:
   and here ..thx
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239852)
Time Spent: 3h 20m  (was: 3h 10m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239849&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239849
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:04
Start Date: 09/May/19 18:04
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r282601151
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
 ##
 @@ -20,7 +20,7 @@ metadata:
   name: config
 data:
   OZONE-SITE.XML_hdds.datanode.dir: /data/storage
-  OZONE-SITE.XML_ozone.scm.datanode.id: /data/datanode.id
+  OZONE-SITE.XML_ozone.scm.datanode.id.dir: /data/datanode.id
 
 Review comment:
   Sorry to comment so later, is this wrong ? Shouldn't this be /data instead 
of /data/datanode.id ?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239849)
Time Spent: 2h 50m  (was: 2h 40m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239850
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 18:04
Start Date: 09/May/19 18:04
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#discussion_r282601151
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
 ##
 @@ -20,7 +20,7 @@ metadata:
   name: config
 data:
   OZONE-SITE.XML_hdds.datanode.dir: /data/storage
-  OZONE-SITE.XML_ozone.scm.datanode.id: /data/datanode.id
+  OZONE-SITE.XML_ozone.scm.datanode.id.dir: /data/datanode.id
 
 Review comment:
   Sorry to comment so late, is this wrong ? Shouldn't this be /data instead of 
/data/datanode.id ?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239850)
Time Spent: 3h  (was: 2h 50m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes file path as its 
> value. It should instead take dir path as its value and assume a standard 
> filename "datanode.id"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836577#comment-16836577
 ] 

Hudson commented on HDDS-1474:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16531 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16531/])
HDDS-1474. ozone.scm.datanode.id config should take path for a dir  
(koneru.hanisha: rev 18cca583ac747f5736777a5aed7e5c02bff1bc0a)
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone-hdfs/docker-config
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/config-configmap.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozonefs/docker-config
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonetrace/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/datanode/InitDatanodeState.java
* (edit) hadoop-ozone/dist/src/main/compose/ozone-om-ha/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
* (edit) hadoop-ozone/dist/src/main/compose/ozones3/docker-config
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
* (edit) hadoop-ozone/dist/src/main/compose/ozonescripts/docker-config
* (edit) hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
* (edit) hadoop-hdds/docs/content/Settings.md
* (edit) hadoop-ozone/dist/src/main/compose/ozoneblockade/docker-config


> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume a 
> standard filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=239831=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239831
 ]

ASF GitHub Bot logged work on HDDS-1496:


Author: ASF GitHub Bot
Created on: 09/May/19 17:43
Start Date: 09/May/19 17:43
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #804: HDDS-1496. 
Support partial chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-490999735
 
 
   This patch requires more changes (after HDDS-1491, which fixes the seek 
operation).
   1. In BlockInputStream#seek(), it is not sufficient to just check that the 
required chunkIndex matches the current buffer's chunk index and that the 
buffer has data remaining. Since the buffer might hold only a partial chunk, 
it may not cover the position being seeked.
   2. In BlockInputStream#readChunkFromContainer(), we should not blindly 
increment the chunkIndex. The last read might have read only a part of the 
chunk, and the next read may need to read from the same chunk again (a sketch 
follows below).
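A minimal sketch of the two checks described above; all names here (chunkSize, 
chunkIndex, bufferStartPos, bufferLen, seekPos) are illustrative assumptions, 
not the actual BlockInputStream members:

{code:java}
// Sketch: seek must verify the buffered byte range, not just the chunk
// index, because the buffer may hold only a partial chunk.
class PartialChunkSeekSketch {
  private final long chunkSize;  // fixed chunk size within the block
  private int chunkIndex = -1;   // chunk currently buffered, -1 if none
  private long bufferStartPos;   // block offset of the first buffered byte
  private long bufferLen;        // number of buffered bytes (may be partial)
  private long seekPos;          // position to honor on the next read

  PartialChunkSeekSketch(long chunkSize) {
    this.chunkSize = chunkSize;
  }

  void seek(long pos) {
    int targetChunk = (int) (pos / chunkSize);
    // A matching chunk index alone is not enough: the seeked position must
    // also fall inside the buffered range.
    boolean covered = targetChunk == chunkIndex
        && pos >= bufferStartPos
        && pos < bufferStartPos + bufferLen;
    if (!covered) {
      // Drop the stale buffer; the next read must fetch targetChunk again,
      // even if it equals the current chunkIndex (no blind increment).
      bufferLen = 0;
      chunkIndex = targetChunk;
    }
    seekPos = pos;
  }
}
{code}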
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239831)
Time Spent: 0.5h  (was: 20m)

> Support partial chunk reads and checksum verification
> -
>
> Key: HDDS-1496
> URL: https://issues.apache.org/jira/browse/HDDS-1496
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk 
> even if we need to read only a part of the chunk.
> This Jira aims to improve readChunkFromContainer so that only the part of 
> the chunk file needed by the client is read, plus the part of the chunk file 
> required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the 
> chunk, and checksum is stored for every 100 bytes in the chunk, i.e. the 
> first checksum is for bytes 0 to 99, the next for bytes 100 to 199, and so 
> on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so 
> that checksum verification can be done (the alignment arithmetic is sketched 
> below).
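An illustration of the alignment arithmetic in the example above (a 
hypothetical helper, not the patch itself; a real implementation would also 
cap the end at the chunk length):

{code:java}
// Expand a requested byte range to checksum-block boundaries so that every
// covering checksum block is read in full and can be verified.
final class ChecksumAlignedRange {
  static long[] align(long readStart, long readEnd, long bytesPerChecksum) {
    long alignedStart = (readStart / bytesPerChecksum) * bytesPerChecksum;
    long alignedEnd =
        ((readEnd / bytesPerChecksum) + 1) * bytesPerChecksum - 1;
    return new long[] {alignedStart, alignedEnd};
  }

  public static void main(String[] args) {
    // Reading indices 120..450 with a checksum every 100 bytes:
    long[] r = align(120, 450, 100);
    System.out.println(r[0] + ".." + r[1]); // prints 100..499
  }
}
{code}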



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-1474.
--
Resolution: Fixed

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume a 
> standard filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239818=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239818
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 17:23
Start Date: 09/May/19 17:23
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-490987959
 
 
   +1. Thanks @vivekratnavel . Test failures are unrelated. I will merge this 
shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239818)
Time Spent: 2.5h  (was: 2h 20m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume a 
> standard filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239819=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239819
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 17:23
Start Date: 09/May/19 17:23
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #792: 
HDDS-1474. ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239819)
Time Spent: 2h 40m  (was: 2.5h)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume a 
> standard filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=239815=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239815
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 09/May/19 17:21
Start Date: 09/May/19 17:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #798: 
HDDS-1499. OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282584180
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache entire table in
+ * in-memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY extends CacheKey,
+    CACHEVALUE extends CacheValue>
+    implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+    cache = new ConcurrentHashMap<>();
+    epochEntries = new TreeSet<>();
+    // Created a singleThreadExecutor, so one cleanup will be running at a
+    // time.
+    executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    cache.put(cacheKey, value);
+    CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239815)
Time Spent: 3h 50m  (was: 3h 40m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using RocksDB put() for every operation. 
> When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation and returning responses.
>  
> This Jira will implement the cache as an integral part of the table. In this 
> way, users of this table do not need to handle checking the cache vs. the DB 
> themselves. For this, we can update the get API in the table to handle the 
> cache (a sketch follows below).
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Adding entries to the cache will be done in further Jiras.
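A minimal sketch of the cache-first get() described above; the names here 
(TypedTableSketch, the Map stand-in for the RocksDB-backed table) are 
illustrative assumptions, not the actual Table code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the cache is an integral part of the table, so callers of get()
// never have to decide between cache and DB themselves.
class TypedTableSketch<K, V> {
  private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
  private final Map<K, V> db; // stand-in for the RocksDB-backed table

  TypedTableSketch(Map<K, V> db) {
    this.db = db;
  }

  // Called when a transaction is applied but not yet flushed to the DB.
  void addCacheEntry(K key, V value) {
    cache.put(key, value);
  }

  // Called after the double buffer flushes a batch to the DB.
  void cleanupCache(K key) {
    cache.remove(key);
  }

  V get(K key) {
    V cached = cache.get(key); // un-flushed entries are visible here first
    return cached != null ? cached : db.get(key);
  }
}
{code}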



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=239817=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239817
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 09/May/19 17:21
Start Date: 09/May/19 17:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-490992025
 
 
   Thank You @hanishakoneru  for the review.
   I have addressed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239817)
Time Spent: 4h  (was: 3h 50m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> For OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using RocksDB put() for every operation. 
> When this comes into place, we need a cache in OzoneManager HA to 
> handle/serve the requests for validation and returning responses.
>  
> This Jira will implement the cache as an integral part of the table. In this 
> way, users of this table do not need to handle checking the cache vs. the DB 
> themselves. For this, we can update the get API in the table to handle the 
> cache.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for cleanup and for adding entries to the cache.
> Adding entries to the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1474) "ozone.scm.datanode.id" config should take path for a dir and not a file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1474?focusedWorklogId=239807=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239807
 ]

ASF GitHub Bot logged work on HDDS-1474:


Author: ASF GitHub Bot
Created on: 09/May/19 17:09
Start Date: 09/May/19 17:09
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #792: HDDS-1474. 
ozone.scm.datanode.id config should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-490987959
 
 
   +1. Thanks @vivekratnavel . I will merge this shortly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239807)
Time Spent: 2h 20m  (was: 2h 10m)

> "ozone.scm.datanode.id" config should take path for a dir and not a file
> 
>
> Key: HDDS-1474
> URL: https://issues.apache.org/jira/browse/HDDS-1474
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, the ozone config "ozone.scm.datanode.id" takes a file path as its 
> value. It should instead take a directory path as its value and assume a 
> standard filename "datanode.id".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-05-09 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836550#comment-16836550
 ] 

Eric Yang commented on HDDS-1510:
-

I was digging into why the ozone script doesn't support symlinks properly, and 
I found the root cause is also a limitation in classpath construction based on 
the classpath file.  We probably want to rethink how the classpath should be 
constructed.  Hadoop partitions the classpath into share/[component]/lib.  The 
same proven method can be applied for share/hadoop-hdds-server-scm/lib to 
reduce the need for runtime expansion of the .classpath file.  This can 
prevent buffer overrun bugs in the current shell expansion scripts.

> Classpath files are deployed to the maven repository as pom/jar files
> -
>
> Key: HDDS-1510
> URL: https://issues.apache.org/jira/browse/HDDS-1510
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Elek, Marton
>Priority: Blocker
>
> 1. Classpath files are plain text files which are generated for each Ozone 
> project. Classpath files are used to define the classpath of a module (om, 
> scm, etc.) based on the maven classpath.
> Example classpath file:
> {code}
> classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
>  
> {code}
> Classpath files are maven artifacts and are copied to share/ozone/classpath 
> in the distribution.
> 2. 0.4.0 was the first release where we deployed the artifacts to the Apache 
> Nexus. [~ajayydv] reported the problem that the staging repository can't be 
> closed: INFRA-18344
> It turned out that the classpath files are uploaded with a jar extension to 
> the repository. We deleted all the classpath files manually and the 
> repository became closable.
> To avoid similar issues we need to fix this problem and make sure that the 
> classpath files are either not uploaded to the repository during a 'mvn 
> deploy', or are uploaded with a proper extension.
> ps: I don't know the exact solution yet, but I can imagine that bumping the 
> version of the maven deploy plugin can help. It seems to be a bug in the 
> plugin.
> ps2: This is a blocker as we need to fix it before the next release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836546#comment-16836546
 ] 

Anu Engineer commented on HDDS-1511:


Is this part of the I/O handling work?  [~arpitagarwal]

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume - this counts the remaining space in 
> open containers until they reach max capacity. The counter is incremented 
> (by the container max capacity) for every container create, and decremented 
> (by the chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create.
> If there is not sufficient free space in the volume, the container create 
> operation can be failed.
> The scope of this jira is just to add the space tracking for open 
> containers. Checking for space and failing container create will be 
> introduced in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836547#comment-16836547
 ] 

Anu Engineer commented on HDDS-1511:


[~sdeka] I am not sure how to handle this issue; I thought I would point it 
out. HDFS will be sharing these disks, so tracking free space based on open 
containers will not address the use case you are mentioning.

bq. Space tracking for open containers will enable adding a safety check during 
container create.
If there is not sufficient free space in the volume, the container create 
operation can be failed.

Perhaps a better investment is to write code that handles the container 
creation or chunk write failure, or to come up with an algorithm that also 
looks at what HDFS is doing.

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume - this counts the remaining space in 
> open containers until they reach max capacity. The counter is incremented 
> (by the container max capacity) for every container create, and decremented 
> (by the chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create.
> If there is not sufficient free space in the volume, the container create 
> operation can be failed.
> The scope of this jira is just to add the space tracking for open 
> containers. Checking for space and failing container create will be 
> introduced in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1391) Add ability in OM to serve delta backup files through an API.

2019-05-09 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1391:

Summary: Add ability in OM to serve delta backup files through an API.  
(was: Add ability in OM to perform DB backups and serve delta backup files 
through an API.)

> Add ability in OM to serve delta backup files through an API.
> -
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1391) Add ability in OM to serve delta updates through an API.

2019-05-09 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1391:

Summary: Add ability in OM to serve delta updates through an API.  (was: 
Add ability in OM to serve delta backup files through an API.)

> Add ability in OM to serve delta updates through an API.
> 
>
> Key: HDDS-1391
> URL: https://issues.apache.org/jira/browse/HDDS-1391
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1511:

Attachment: HDDS-1511.000.patch
Status: Patch Available  (was: Open)

Unit test code added to the existing TestContainerPersistence tests.

> Space tracking for Open Containers in HDDS Volumes
> --
>
> Key: HDDS-1511
> URL: https://issues.apache.org/jira/browse/HDDS-1511
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1511.000.patch
>
>
> For every HDDS Volume, track the space usage in open containers. Introduce a 
> counter committedBytes in HddsVolume - this counts the remaining space in 
> open containers until they reach max capacity. The counter is incremented 
> (by the container max capacity) for every container create, and decremented 
> (by the chunk size) for every chunk write.
> Space tracking for open containers will enable adding a safety check during 
> container create.
> If there is not sufficient free space in the volume, the container create 
> operation can be failed.
> The scope of this jira is just to add the space tracking for open 
> containers. Checking for space and failing container create will be 
> introduced in a subsequent jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239790=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239790
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 16:31
Start Date: 09/May/19 16:31
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #806: HDDS-1224. 
Restructure code to validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#discussion_r282565511
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -101,9 +103,8 @@ public XceiverClientGrpc(Pipeline pipeline, Configuration 
config) {
 
   /**
* To be used when grpc token is not enabled.
-   * */
-  @Override
-  public void connect() throws Exception {
+   */
+  @Override public void connect() throws Exception {
 
 Review comment:
   It's an unintended change. Will address it in the next commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239790)
Time Spent: 2.5h  (was: 2h 20m)

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1224.000.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In the read path, validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and additional checksum verification 
> happens in the Ozone client to verify the read chunk response. The aim of 
> this Jira is to modify the function call to take a validator function as 
> part of reading data, so that all validation can happen in a single unified 
> place (a sketch of this shape follows below).
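A minimal sketch of the validator-based read call described above; the names 
(ResponseValidator, readWithValidation) are illustrative assumptions, not the 
actual patch:

{code:java}
import java.io.IOException;
import java.util.List;
import java.util.function.Function;

// A validator function is passed into the read call, so response-status
// checks and checksum verification run in one unified place instead of being
// split between XceiverClientGrpc and the Ozone client.
final class ValidatedReadSketch {

  interface ResponseValidator<REQ, RESP> {
    void validate(REQ request, RESP response) throws IOException;
  }

  static <REQ, RESP> RESP readWithValidation(
      REQ request,
      Function<REQ, RESP> send,
      List<ResponseValidator<REQ, RESP>> validators) throws IOException {
    RESP response = send.apply(request);
    for (ResponseValidator<REQ, RESP> v : validators) {
      v.validate(request, response); // e.g. status check, then checksum
    }
    return response;
  }
}
{code}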



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239785=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239785
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 16:16
Start Date: 09/May/19 16:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #806: HDDS-1224. 
Restructure code to validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#issuecomment-490970181
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 475 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 917 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 307 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 472 | the patch passed |
   | +1 | compile | 252 | the patch passed |
   | +1 | javac | 252 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-hdds: The patch generated 41 new + 0 
unchanged - 0 fixed = 41 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 753 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 250 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 172 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2339 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 8345 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkIndex; locked 96% of 
time  Unsynchronized access at BlockInputStream.java:96% of time  
Unsynchronized access at BlockInputStream.java:[line 303] |
   | Failed junit tests | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey 
|
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/806 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 58cc9b06f2c2 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90add05 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/1/testReport/ |
   | Max. 

[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239768=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239768
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 15:31
Start Date: 09/May/19 15:31
Worklog Time Spent: 10m 
  Work Description: jiwq commented on pull request #806: HDDS-1224. 
Restructure code to validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#discussion_r282540719
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -101,9 +103,8 @@ public XceiverClientGrpc(Pipeline pipeline, Configuration 
config) {
 
   /**
* To be used when grpc token is not enabled.
-   * */
-  @Override
-  public void connect() throws Exception {
+   */
+  @Override public void connect() throws Exception {
 
 Review comment:
   What's the purpose of modifying it?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239768)
Time Spent: 2h 10m  (was: 2h)

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1224.000.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In the read path, validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and additional checksum verification 
> happens in the Ozone client to verify the read chunk response. The aim of 
> this Jira is to modify the function call to take a validator function as 
> part of reading data, so that all validation can happen in a single unified 
> place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1511) Space tracking for Open Containers in HDDS Volumes

2019-05-09 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1511:
---

 Summary: Space tracking for Open Containers in HDDS Volumes
 Key: HDDS-1511
 URL: https://issues.apache.org/jira/browse/HDDS-1511
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Supratim Deka
Assignee: Supratim Deka


For every HDDS Volume, track the space usage in open containers. Introduce a 
counter committedBytes in HddsVolume - this counts the remaining space in open 
containers until they reach max capacity. The counter is incremented (by the 
container max capacity) for every container create, and decremented (by the 
chunk size) for every chunk write (a sketch of the counter follows below).

Space tracking for open containers will enable adding a safety check during 
container create.
If there is not sufficient free space in the volume, the container create 
operation can be failed.

The scope of this jira is just to add the space tracking for open containers. 
Checking for space and failing container create will be introduced in a 
subsequent jira.
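A minimal sketch of the counter semantics described above; the class and 
method names are illustrative assumptions, the real change is in the attached 
patch:

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// committedBytes tracks space promised to open containers: reserve the full
// max capacity at container create, then release chunk-sized slices as data
// is actually written.
class HddsVolumeSpaceSketch {
  private final AtomicLong committedBytes = new AtomicLong(0);

  // Container create: reserve the container's full max capacity.
  long incCommittedBytes(long containerMaxCapacity) {
    return committedBytes.addAndGet(containerMaxCapacity);
  }

  // Chunk write: those bytes are now used, not merely committed.
  long decCommittedBytes(long chunkSize) {
    return committedBytes.addAndGet(-chunkSize);
  }

  long getCommittedBytes() {
    return committedBytes.get();
  }
}
{code}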




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-05-09 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836453#comment-16836453
 ] 

Erik Krogen commented on HDFS-14245:


I accidentally created my v006 diff off of the wrong starting point. Attaching 
v007 which actually contains the correct diff.

[~vagarychen], I see a test failure for 
{{TestMultipleNNPortQOP.testMultipleNNPortOverwriteDownStream}} that I'm 
confident is unrelated to my change. Can you look into it? It seems that the 
test must be flaky.

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116, we did a simple fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-05-09 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14245:
---
Attachment: HDFS-14245.007.patch

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14245.000.patch, HDFS-14245.001.patch, 
> HDFS-14245.002.patch, HDFS-14245.003.patch, HDFS-14245.004.patch, 
> HDFS-14245.005.patch, HDFS-14245.006.patch, HDFS-14245.007.patch, 
> HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> Similar to HDFS-14116, we did a simple fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239748=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239748
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 13:55
Start Date: 09/May/19 13:55
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #806: HDDS-1224. 
Restructure code to validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806
 
 
   .
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239748)
Time Spent: 1h 50m  (was: 1h 40m)

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1224.000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In the read path, validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and additional checksum verification 
> happens in the Ozone client to verify the read chunk response. The aim of 
> this Jira is to modify the function call to take a validator function as 
> part of reading data, so that all validation can happen in a single unified 
> place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239749=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239749
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 13:56
Start Date: 09/May/19 13:56
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #793: HDDS-1224. 
Restructure code to validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#issuecomment-490915404
 
 
   Closing this pull request, will create a new one to address the issue.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239749)
Time Spent: 2h  (was: 1h 50m)

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1224.000.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In the read path, validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and additional checksum verification 
> happens in the Ozone client to verify the read chunk response. The aim of 
> this Jira is to modify the function call to take a validator function as 
> part of reading data, so that all validation can happen in a single unified 
> place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239745=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239745
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 13:47
Start Date: 09/May/19 13:47
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #793: HDDS-1224. 
Restructure code to validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793#discussion_r282492499
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 ##
 @@ -0,0 +1,344 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.client.io;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.scm.storage.BufferPool;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.ListIterator;
+
+/**
+ * This class manages the stream entries list and handles block allocation
+ * from OzoneManager.
+ */
+public class BlockOutputStreamEntryPool {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(BlockOutputStreamEntryPool.class);
+
+  private final List<BlockOutputStreamEntry> streamEntries;
+  private int currentStreamIndex;
+  private final OzoneManagerProtocol omClient;
+  private final OmKeyArgs keyArgs;
+  private final XceiverClientManager xceiverClientManager;
+  private final int chunkSize;
+  private final String requestID;
+  private final long streamBufferFlushSize;
+  private final long streamBufferMaxSize;
+  private final long watchTimeout;
+  private final long blockSize;
+  private final int bytesPerChecksum;
+  private final ContainerProtos.ChecksumType checksumType;
+  private final BufferPool bufferPool;
+  private OmMultipartCommitUploadPartInfo commitUploadPartInfo;
+  private final long openID;
+  private ExcludeList excludeList;
+
+  @SuppressWarnings("parameternumber")
+  public BlockOutputStreamEntryPool(OzoneManagerProtocol omClient,
+      int chunkSize, String requestId, HddsProtos.ReplicationFactor factor,
+      HddsProtos.ReplicationType type, long bufferFlushSize,
+      long bufferMaxSize,
+      long size, long watchTimeout, ContainerProtos.ChecksumType checksumType,
+      int bytesPerChecksum, String uploadID, int partNumber,
+      boolean isMultipart, OmKeyInfo info,
+      XceiverClientManager xceiverClientManager, long openID) {
+    streamEntries = new ArrayList<>();
+    currentStreamIndex = 0;
+    this.omClient = omClient;
+    this.keyArgs = new OmKeyArgs.Builder().setVolumeName(info.getVolumeName())
+        .setBucketName(info.getBucketName()).setKeyName(info.getKeyName())
+        .setType(type).setFactor(factor).setDataSize(info.getDataSize())
+        .setIsMultipartKey(isMultipart).setMultipartUploadID(uploadID)
+        .setMultipartUploadPartNumber(partNumber).build();
+    this.xceiverClientManager = xceiverClientManager;
+    this.chunkSize = chunkSize;
+    this.requestID = requestId;
+    this.streamBufferFlushSize = bufferFlushSize;
+    this.streamBufferMaxSize = bufferMaxSize;
+    this.blockSize = size;
+    this.watchTimeout = watchTimeout;
+    this.bytesPerChecksum = bytesPerChecksum;
+    this.checksumType = checksumType;
+    this.openID = openID;
+    this.excludeList = new ExcludeList();
+
+    Preconditions.checkState(chunkSize > 0);
+    Preconditions.checkState(streamBufferFlushSize > 0);
+

[jira] [Work logged] (HDDS-1224) Restructure code to validate the response from server in the Read path

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1224?focusedWorklogId=239746=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239746
 ]

ASF GitHub Bot logged work on HDDS-1224:


Author: ASF GitHub Bot
Created on: 09/May/19 13:49
Start Date: 09/May/19 13:49
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #793: HDDS-1224. 
Restructure code to validate the response from server in the Read path.
URL: https://github.com/apache/hadoop/pull/793
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239746)
Time Spent: 1h 40m  (was: 1.5h)

> Restructure code to validate the response from server in the Read path
> --
>
> Key: HDDS-1224
> URL: https://issues.apache.org/jira/browse/HDDS-1224
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1224.000.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In the read path, validation of the response while reading data from the 
> datanodes happens in XceiverClientGrpc, and additional checksum verification 
> happens in the Ozone client to verify the read chunk response. The aim of 
> this Jira is to modify the function call to take a validator function as 
> part of reading data, so that all validation can happen in a single unified 
> place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1509:
-
Labels: pull-request-available  (was: )

> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> The test fails because it expects the exception after 2 datanode failures 
> to be of type RaftRetryFailureException. But it can happen that the pipeline 
> gets destroyed quickly, before the actual write executes over Ratis, in 
> which case the write fails with GroupMismatchException instead.
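
A minimal sketch of an assertion that tolerates both outcomes (the test 
scaffolding is an illustrative assumption; the two exception classes are 
assumed to come from org.apache.ratis.protocol):

{code}
import java.io.IOException;
import org.apache.ratis.protocol.GroupMismatchException;
import org.apache.ratis.protocol.RaftRetryFailureException;
import org.junit.Assert;

public class TwoDatanodeFailureAssertSketch {
  // Accept either failure mode, since pipeline teardown can race with
  // the write after 2 datanodes fail.
  static void assertExpectedFailure(IOException ioe) {
    Throwable cause = ioe.getCause();
    Assert.assertTrue("unexpected failure cause: " + cause,
        cause instanceof RaftRetryFailureException
            || cause instanceof GroupMismatchException);
  }
}
{code}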



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1509?focusedWorklogId=239743=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239743
 ]

ASF GitHub Bot logged work on HDDS-1509:


Author: ASF GitHub Bot
Created on: 09/May/19 13:37
Start Date: 09/May/19 13:37
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #805: HDDS-1509. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
URL: https://github.com/apache/hadoop/pull/805
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239743)
Time Spent: 10m
Remaining Estimate: 0h

> TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
> 
>
> Key: HDDS-1509
> URL: https://issues.apache.org/jira/browse/HDDS-1509
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test fails because it expects the exception after 2 datanode failures 
> to be of type RaftRetryFailureException. But it can happen that the pipeline 
> gets destroyed quickly, before the actual write executes over Ratis, in 
> which case the write fails with GroupMismatchException instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1507) Provide example k8s deployment files for secure ozone setup

2019-05-09 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1507:
---
Description: 
We need documentation and example k8s files to demonstrate how to start a 
secure ozone cluster inside kubernetes.

The biggest challenge is the creation of the keytab files. There are multiple 
solutions for that. For example, in the ozonesecure docker-compose cluster we 
have a simple rest endpoint and the containers download the required keytab 
files from that specific (insecure) rest endpoint.

While this is a very good and flexible solution, I prefer to use something 
which is more production-like and can be used as an example of how ozone 
should be used in production.

A dynamic approach can be achieved with HashiCorp Vault, which is a _secure_ 
key-value store. With a fair amount of configuration, the pre-generated 
keytabs can be stored there and made available to all the containers.

But I prefer to use a simple solution here. Let's say that the keytab 
generation is the responsibility of the user. The generated keytab files can 
be stored in a k8s ConfigMap object and can be easily mounted.

While this approach requires some manual work, it can be used for production.




  was:We need documentation and example k8s files to demonstrate how to start 
a secure ozone cluster inside kubernetes.


> Provide example k8s deployment files for secure ozone setup
> ---
>
> Key: HDDS-1507
> URL: https://issues.apache.org/jira/browse/HDDS-1507
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> We need documentation and example k8s files to demonstrate how to start a 
> secure ozone cluster inside kubernetes.
> The biggest challenge is the creation of the keytab files. There are multiple 
> solutions for that. For example, in the ozonesecure docker-compose cluster we 
> have a simple rest endpoint and the containers download the required keytab 
> files from that specific (insecure) rest endpoint.
> While this is a very good and flexible solution, I prefer to use something 
> which is more production-like and can be used as an example of how ozone 
> should be used in production.
> A dynamic approach can be achieved with HashiCorp Vault, which is a _secure_ 
> key-value store. With a fair amount of configuration, the pre-generated 
> keytabs can be stored there and made available to all the containers.
> But I prefer to use a simple solution here. Let's say that the keytab 
> generation is the responsibility of the user. The generated keytab files can 
> be stored in a k8s ConfigMap object and can be easily mounted.
> While this approach requires some manual work, it can be used for production.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1510) Classpath files are deployed to the maven repository as pom/jar files

2019-05-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1510:
--

 Summary: Classpath files are deployed to the maven repository as 
pom/jar files
 Key: HDDS-1510
 URL: https://issues.apache.org/jira/browse/HDDS-1510
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Reporter: Elek, Marton


1. Classpath files are plain text files which are generated for each ozone 
project. Classpath files are used to define the classpath of a module (om, 
scm, etc.) based on the maven classpath.

Example classpath file:

{code}
classpath=$HDDS_LIB_JARS_DIR/kerb-simplekdc-1.0.1.jar:$HDDS_LIB_JARS_DIR/hk2-utils-2.5.0.jar:$HDDS_LIB_JARS_DIR/jackson-core-2.9.5.jar:$HDDS_LIB_JARS_DIR/ratis-netty-0.4.0-fe2b15d-SNAPSHOT.jar:$HDDS_LIB_JARS_DIR/protobuf-java-2.5.0.jar:...
 
{code}

Classpath files are maven artifacts and are copied to share/ozone/classpath 
in the distribution.

2. 0.4.0 was the first release where we deployed the artifacts to the apache 
nexus. [~ajayydv] reported the problem that the staging repository couldn't be 
closed: INFRA-18344

It turned out that the classpath files were uploaded to the repository with a 
jar extension. We deleted all the classpath files manually and the repository 
became closable.

To avoid similar issues we need to fix this problem and make sure that the 
classpath files are either not uploaded to the repository during a 'mvn 
deploy', or are uploaded with a proper extension.

ps: I don't know the exact solution yet, but I can imagine that bumping the 
version of the maven deploy plugin could help. It seems to be a bug in the 
plugin.

ps2: This is a blocker as we need to fix it before the next release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1509) TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-09 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1509:
-

 Summary: TestBlockOutputStreamWithFailures#test2DatanodesFailure 
fails intermittently
 Key: HDDS-1509
 URL: https://issues.apache.org/jira/browse/HDDS-1509
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


The test fails because it expects the exception after 2 datanode failures to 
be of type RaftRetryFailureException. But it can happen that the pipeline gets 
destroyed quickly, before the actual write executes over Ratis, in which case 
the write fails with GroupMismatchException instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1500) Allocate block failures in client should print exception trace.

2019-05-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836269#comment-16836269
 ] 

Hudson commented on HDDS-1500:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16529/])
HDDS-1500. Allocate block failures in client should print exception (msingh: 
rev 25951255ce9a1ce3b58fd8122cd37e895b8f80d6)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java


> Allocate block failures in client should print exception trace.
> ---
>
> Key: HDDS-1500
> URL: https://issues.apache.org/jira/browse/HDDS-1500
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following error is seen intermittently in the Ozone client logs while 
> writing large keys. We need to log the entire exception trace to find out 
> more about the failure.
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> {code}
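
A minimal sketch of the kind of change implied here (class, method, and 
variable names are illustrative assumptions):

{code}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class AllocateBlockLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(AllocateBlockLoggingSketch.class);

  void allocateNewBlock(int succeededAllocates) {
    try {
      allocateFromOm();   // stands in for the actual OM allocate-block call
    } catch (IOException ioe) {
      // Passing the exception as the last argument makes SLF4J print the
      // full stack trace, not just the message.
      LOG.error("Try to allocate more blocks for write failed, already "
          + "allocated {} blocks for this write.", succeededAllocates, ioe);
    }
  }

  private void allocateFromOm() throws IOException {
    throw new IOException("allocate block failed");   // placeholder
  }
}
{code}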



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1491) Ozone KeyInputStream seek() should not read the chunk file

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1491?focusedWorklogId=239690=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239690
 ]

ASF GitHub Bot logged work on HDDS-1491:


Author: ASF GitHub Bot
Created on: 09/May/19 10:21
Start Date: 09/May/19 10:21
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #795: HDDS-1491. Ozone 
KeyInputStream seek() should not read the chunk file.
URL: https://github.com/apache/hadoop/pull/795#issuecomment-490846871
 
 
   Thanks @hanishakoneru for working on this. The patch overall looks good to 
me. Some minor comments:
   1. BlockInputStream.java : 81 : update javadoc
   2. BlockInputStream.java : 392 : correct the comment
   3. It would be good to add some metrics to XceiverClientGrpc, similar to 
xceiverClientRatis, and add some tests that use the metrics to verify that no 
reads happen as part of seek; a sketch of such a metric follows below.
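
A minimal sketch of such a read-path metric and the test-side check (the 
class and method names are illustrative assumptions; the real patch may wire 
this differently):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative read-path metric: count chunk reads so a test can assert
// that seek() performed none.
final class GrpcReadMetricsSketch {
  private final AtomicLong chunkReadCount = new AtomicLong();

  void onChunkRead() {
    chunkReadCount.incrementAndGet();
  }

  long getChunkReadCount() {
    return chunkReadCount.get();
  }
}
{code}

A test could record getChunkReadCount() before and after calling seek() and 
assert that the two values are equal.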
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239690)
Time Spent: 50m  (was: 40m)

> Ozone KeyInputStream seek() should not read the chunk file
> --
>
> Key: HDDS-1491
> URL: https://issues.apache.org/jira/browse/HDDS-1491
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> KeyInputStream#seek() calls BlockInputStream#seek() to adjust the buffer 
> position to the seeked position. As part of the seek operation, the whole 
> chunk is read from the container and stored in the buffer so that the buffer 
> position can be advanced to the seeked position. 
> We should not read from disk on a seek() operation. Instead, when the chunk 
> file is read into the buffer during a subsequent read operation, we can 
> advance the buffer position to the previously seeked position at that time.
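
A minimal sketch of this lazy-seek idea (the field and method names are 
illustrative assumptions):

{code}
import java.io.IOException;

final class LazySeekSketch {
  private static final int CHUNK_SIZE = 4 * 1024 * 1024;   // assumed size

  private long position;   // logical stream position
  private byte[] buffer;   // filled lazily from the chunk file
  private int bufferPos;   // read cursor within the buffer

  // seek() only records the target position; no disk I/O happens here.
  void seek(long pos) {
    position = pos;
    buffer = null;         // refill happens on the next read()
  }

  // The single disk read happens here, and the buffer cursor is advanced
  // directly to the previously seeked offset.
  int read() throws IOException {
    if (buffer == null) {
      buffer = readChunkFromContainer(position);
      bufferPos = (int) (position % CHUNK_SIZE);
    }
    if (bufferPos >= buffer.length) {
      return -1;           // end of the buffered chunk
    }
    position++;
    return buffer[bufferPos++] & 0xff;
  }

  private byte[] readChunkFromContainer(long pos) throws IOException {
    return new byte[CHUNK_SIZE];   // placeholder for the container read
  }
}
{code}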



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1500) Allocate block failures in client should print exception trace.

2019-05-09 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-1500.
-
Resolution: Fixed

Thanks [~avijayan] for working on this. I have committed this to trunk.

> Allocate block failures in client should print exception trace.
> ---
>
> Key: HDDS-1500
> URL: https://issues.apache.org/jira/browse/HDDS-1500
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following error is seen intermittently in the Ozone client logs while 
> writing large keys. We need to log the entire exception trace to find out 
> more about the failure.
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1500) Allocate block failures in client should print exception trace.

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1500?focusedWorklogId=239687=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239687
 ]

ASF GitHub Bot logged work on HDDS-1500:


Author: ASF GitHub Bot
Created on: 09/May/19 10:08
Start Date: 09/May/19 10:08
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #801: HDDS-1500 : 
Allocate block failures in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239687)
Time Spent: 1h 40m  (was: 1.5h)

> Allocate block failures in client should print exception trace.
> ---
>
> Key: HDDS-1500
> URL: https://issues.apache.org/jira/browse/HDDS-1500
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The following error is seen intermittently in the Ozone client logs while 
> writing large keys. We need to log the entire exception trace to find out 
> more about the failure.
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1500) Allocate block failures in client should print exception trace.

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1500?focusedWorklogId=239686=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239686
 ]

ASF GitHub Bot logged work on HDDS-1500:


Author: ASF GitHub Bot
Created on: 09/May/19 10:06
Start Date: 09/May/19 10:06
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #801: HDDS-1500 : Allocate 
block failures in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490842428
 
 
   Thanks for the updated patch @avijayanhwx . I will commit this now.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239686)
Time Spent: 1.5h  (was: 1h 20m)

> Allocate block failures in client should print exception trace.
> ---
>
> Key: HDDS-1500
> URL: https://issues.apache.org/jira/browse/HDDS-1500
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The following error is seen intermittently in the Ozone client logs while 
> writing large keys. We need to log the entire exception trace to find out 
> more about the failure.
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1502) Add metrics for Ozone Ratis performance

2019-05-09 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-1502:
-

Assignee: Shashikant Banerjee  (was: Mukul Kumar Singh)

> Add metrics for Ozone Ratis performance
> ---
>
> Key: HDDS-1502
> URL: https://issues.apache.org/jira/browse/HDDS-1502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>
> This jira will add some metrics for Ratis pipeline performance
> a) number of chunks written per seconds
> b) number of bytes written per second
> c) number of chunk/bytes missed during read State Machine data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1485) Propagate correct errors from DN to Ozone client.

2019-05-09 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1485:
---

Assignee: Supratim Deka  (was: Mukul Kumar Singh)

> Propagate correct errors from DN to Ozone client.
> -
>
> Key: HDDS-1485
> URL: https://issues.apache.org/jira/browse/HDDS-1485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Supratim Deka
>Priority: Blocker
>
> *Environment*
> 26 node physical cluster.
> All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error. 
> {code}
> 19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 424 does not exist
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1485) Propagate correct errors from DN to Ozone client.

2019-05-09 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1485:
---

Assignee: Mukul Kumar Singh  (was: Shashikant Banerjee)

> Propagate correct errors from DN to Ozone client.
> -
>
> Key: HDDS-1485
> URL: https://issues.apache.org/jira/browse/HDDS-1485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Mukul Kumar Singh
>Priority: Blocker
>
> *Environment*
> 26 node physical cluster.
> All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error. 
> {code}
> 19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 424 does not exist
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1485) Propagate correct errors from DN to Ozone client.

2019-05-09 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1485:

Summary: Propagate correct errors from DN to Ozone client.  (was: Ozone 
writes fail when single threaded client writes 100MB files repeatedly. )

> Propagate correct errors from DN to Ozone client.
> -
>
> Key: HDDS-1485
> URL: https://issues.apache.org/jira/browse/HDDS-1485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Blocker
>
> *Environment*
> 26 node physical cluster.
> All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error. 
> {code}
> 19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 424 does not exist
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1485) Ozone writes fail when single threaded client writes 100MB files repeatedly.

2019-05-09 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836252#comment-16836252
 ] 

Mukul Kumar Singh commented on HDDS-1485:
-

The problem here is the open file limit being reached on the datanode. The 
container creation (create directory) failed because the maximum number of 
open files was reached, and that fails the writeChunk with 
ContainerNotFoundException. We should improve the error logging for this 
error, though; a sketch of better logging at the creation site follows below.
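
A minimal sketch of what that logging could look like (the class, method, and 
rethrow below are illustrative assumptions, not the actual datanode code):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ContainerCreateLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ContainerCreateLoggingSketch.class);

  // Log the root cause (e.g. "Too many open files") before rethrowing,
  // so the failure is not reported only as ContainerNotFoundException.
  void createContainerDir(long containerId, Path containerDir)
      throws IOException {
    try {
      Files.createDirectories(containerDir);
    } catch (IOException e) {
      LOG.error("Creation of container {} failed at {}",
          containerId, containerDir, e);
      throw e;
    }
  }
}
{code}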



> Ozone writes fail when single threaded client writes 100MB files repeatedly. 
> -
>
> Key: HDDS-1485
> URL: https://issues.apache.org/jira/browse/HDDS-1485
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Blocker
>
> *Environment*
> 26 node physical cluster.
> All Datanodes are up and running.
> Client attempting to write 1600 x 100MB files using the FsStress utility 
> (https://github.com/arp7/FsPerfTest) fails with the following error. 
> {code}
> 19/05/02 09:58:49 ERROR storage.BlockOutputStream: Unexpected Storage 
> Container Exception:
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  ContainerID 424 does not exist
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:539)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:616)
> at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> It looks like a corruption in the container metadata. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14483) Backport HDFS-3246 ByteBuffer pread interface to branch-2.8.x

2019-05-09 Thread Zheng Hu (JIRA)
Zheng Hu created HDFS-14483:
---

 Summary: Backport HDFS-3246 ByteBuffer pread interface to 
branch-2.8.x
 Key: HDFS-14483
 URL: https://issues.apache.org/jira/browse/HDFS-14483
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Zheng Hu
Assignee: Zheng Hu






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1508) Provide example k8s deployment files for the new CSI server

2019-05-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1508:
--

 Summary: Provide example k8s deployment files for the new CSI 
server
 Key: HDDS-1508
 URL: https://issues.apache.org/jira/browse/HDDS-1508
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


Issue HDDS-1382 introduced a new internal CSI server. We should provide example 
deployment files to make it easy to deploy it to any kubernetes cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1507) Provide example k8s deployment files for secure ozone setup

2019-05-09 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1507 started by Elek, Marton.
--
> Provide example k8s deployment files for secure ozone setup
> ---
>
> Key: HDDS-1507
> URL: https://issues.apache.org/jira/browse/HDDS-1507
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> We need documentation and example k8s files to demonstrate how to start a 
> secure ozone cluster inside kubernetes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1507) Provide example k8s deployment files for secure ozone setup

2019-05-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1507:
--

 Summary: Provide example k8s deployment files for secure ozone 
setup
 Key: HDDS-1507
 URL: https://issues.apache.org/jira/browse/HDDS-1507
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


We need documentation and example k8s files to demonstrate how to start a 
secure ozone cluster inside kubernetes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1506) Ozone Manager can't be started with existing reverse dns

2019-05-09 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1506:
--

 Summary: Ozone Manager can't be started with existing reverse dns
 Key: HDDS-1506
 URL: https://issues.apache.org/jira/browse/HDDS-1506
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Affects Versions: 0.4.0
Reporter: Elek, Marton
Assignee: Elek, Marton


I tried to start the current ozone in kubernetes, and ozone manager 
initialization failed with the following error:

{code}
2019-05-09 08:40:23 INFO  OzoneManager:51 - registered UNIX signal handlers for 
[TERM, HUP, INT]
2019-05-09 08:40:23 WARN  ScmUtils:63 - ozone.om.db.dirs is not configured. We 
recommend adding this setting. Falling back to ozone.metadata.dirs instead.
2019-05-09 08:40:24 INFO  OzoneManager:1029 - Initializing secure OzoneManager.
2019-05-09 08:40:24 ERROR OMCertificateClient:209 - Default certificate serial 
id is not set. Can't locate the default certificate for this client.
2019-05-09 08:40:24 INFO  OMCertificateClient:588 - Certificate client init 
case: 0
2019-05-09 08:40:24 INFO  OMCertificateClient:55 - Creating keypair for client 
as keypair and certificate not found.
2019-05-09 08:40:24 INFO  OzoneManager:1035 - Init response: GETCERT
2019-05-09 08:40:24 INFO  OzoneSecurityUtil:103 - Adding 
ip:192.168.11.208,host:om-0.om.default.svc.cluster.local
2019-05-09 08:40:24 INFO  OzoneSecurityUtil:107 - ip:127.0.0.1,host:localhost 
not returned.
2019-05-09 08:40:24 ERROR OzoneManager:1421 - Incorrect om rpc address. 
omRpcAdd:om-0.om:9862
2019-05-09 08:40:24 ERROR OzoneManager:888 - Failed to start the OzoneManager.
java.lang.RuntimeException: Can't get SCM signed certificate. omRpcAdd: 
om-0.om:9862
at 
org.apache.hadoop.ozone.om.OzoneManager.getSCMSignedCert(OzoneManager.java:1422)
at 
org.apache.hadoop.ozone.om.OzoneManager.initializeSecurity(OzoneManager.java:1041)
at org.apache.hadoop.ozone.om.OzoneManager.omInit(OzoneManager.java:994)
at 
org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:951)
at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:882)
2019-05-09 08:40:24 INFO  ExitUtil:210 - Exiting with status 1: 
java.lang.RuntimeException: Can't get SCM signed certificate. omRpcAdd: 
om-0.om:9862
2019-05-09 08:40:24 INFO  OzoneManager:51 - SHUTDOWN_MSG: 
{code}

The root of the problem is this method in OzoneManager:

{code}
 private static void getSCMSignedCert(CertificateClient client,
  OzoneConfiguration config, OMStorage omStore) throws IOException {
   ...
omRpcAdd = OmUtils.getOmAddress(config);
if (omRpcAdd == null || omRpcAdd.getAddress() == null) {
  LOG.error("Incorrect om rpc add-ress. omRpcAdd:{}", omRpcAdd);
  throw new RuntimeException("Can't get SCM signed certificate. " +
  "omRpcAdd: " + omRpcAdd);
}
{code}

In my case omRpcAdd.getAddress() seems to be empty at the time of 
initialization, as the reverse dns entry becomes available only when om is 
started. This is a classic chicken/egg problem: we need reverse dns for 
initialization, but the reverse dns entry is added only when the container is 
started. One possible mitigation is sketched below.
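
A minimal sketch of one possible mitigation, retrying resolution instead of 
failing immediately (an illustrative assumption, not the fix chosen in this 
jira):

{code}
import java.net.InetSocketAddress;

final class ResolveWithRetrySketch {
  // Retry address resolution a few times, since the reverse dns entry
  // only appears once the pod is up.
  static InetSocketAddress resolve(String host, int port, int attempts,
      long sleepMillis) throws InterruptedException {
    for (int i = 0; i < attempts; i++) {
      InetSocketAddress addr = new InetSocketAddress(host, port);
      if (addr.getAddress() != null) {   // null means resolution failed
        return addr;
      }
      Thread.sleep(sleepMillis);
    }
    throw new IllegalStateException("Could not resolve host: " + host);
  }
}
{code}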



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1493) Download and Import Container replicator fails.

2019-05-09 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836156#comment-16836156
 ] 

Elek, Marton commented on HDDS-1493:


{code}
2019-05-07 21:55:34,079 [ContainerReplicationThread-39] INFO  
(DownloadAndImportReplicator.java:115) - Starting replication of container 
316 from [b4531ebf-524a-4d15-9664-7f4c9d575165{ip: 10.17.200.26, host: 
vb0916.halxg.cloudera.com, certSerialId: null}, 
9ad24543-defe-4d16-bcfd-83156fcb51fb{ip: 10.17.200.25, host: 
vb0915.halxg.cloudera.com, certSerialId: null}]
2019-05-07 21:55:49,799 [grpc-default-executor-43] INFO  
(GrpcReplicationClient.java:160) - Container is downloaded to 
/tmp/container-copy/container-316.tar.gz
2019-05-07 21:55:49,800 [ContainerReplicationThread-39] INFO  
(DownloadAndImportReplicator.java:126) - Container 316 is downloaded, 
starting to import.
2019-05-07 21:55:55,553 [ContainerReplicationThread-39] ERROR 
(DownloadAndImportReplicator.java:95) - Can't import the downloaded 
container data id=316
java.io.IOException: Container descriptor is missing from the container archive.
at 
org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.unpackContainerDescriptor(TarContainerPacker.java:208)
at 
org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator.importContainer(DownloadAndImportReplicator.java:74)
at 
org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator.replicate(DownloadAndImportReplicator.java:128)
at 
org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$TaskRunner.run(ReplicationSupervisor.java:115)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-05-07 21:55:55,554 [ContainerReplicationThread-39] INFO  
(DownloadAndImportReplicator.java:129) - Container 316 is replicated 
successfully
2019-05-07 21:55:55,555 [ContainerReplicationThread-39] INFO  
(ReplicationSupervisor.java:122) - Container 316 is replicated.
2019-05-07 21:55:55,555 [ContainerReplicationThread-39] INFO  
(DownloadAndImportReplicator.java:115) - Starting replication 
{code}

Containers are copied as tar files; some parts of the tar file can be missing 
(a flush problem?). If so, a defensive write-side sketch follows below.

As far as I remember, the unit test which checks this functionality has a 
similar intermittent failure.
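
If the cause is indeed an unflushed archive, a minimal sketch of the defensive 
write side could look like this (commons-compress is assumed; the class and 
method names are illustrative):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

final class ContainerPackSketch {
  // try-with-resources guarantees the tar stream is finished and flushed
  // before the archive is handed over for import.
  static void pack(Path archive) throws IOException {
    try (TarArchiveOutputStream tar = new TarArchiveOutputStream(
        new GZIPOutputStream(Files.newOutputStream(archive)))) {
      // ... add the container descriptor and chunk entries here ...
      tar.finish();   // writes the trailing tar records
    }
  }
}
{code}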



> Download and Import Container replicator fails.
> ---
>
> Key: HDDS-1493
> URL: https://issues.apache.org/jira/browse/HDDS-1493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Nanda kumar
>Priority: Major
> Attachments: ozone.log
>
>
> While running batch jobs (16 threads writing a lot of 10MB+ files), the 
> following error is seen in the SCM logs.
> {code}
> ERROR  - Can't import the downloaded container data id=317
> {code}
> It is unclear from the logs why this happens. Needs more investigation to 
> find the root cause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1500) Allocate block failures in client should print exception trace.

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1500?focusedWorklogId=239637=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239637
 ]

ASF GitHub Bot logged work on HDDS-1500:


Author: ASF GitHub Bot
Created on: 09/May/19 07:21
Start Date: 09/May/19 07:21
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #801: HDDS-1500 : Allocate 
block failures in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490777899
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239637)
Time Spent: 1h 20m  (was: 1h 10m)

> Allocate block failures in client should print exception trace.
> ---
>
> Key: HDDS-1500
> URL: https://issues.apache.org/jira/browse/HDDS-1500
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The following error is seen intermittently in the Ozone client logs while 
> writing large keys. We need to log the entire exception trace to find out 
> more about the failure.
> {code}
> 19/04/22 10:13:32 ERROR io.KeyOutputStream: Try to allocate more blocks for 
> write failed, already allocated 0 blocks for this write.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1505) Remove "ozone.enabled" parameter from ozone configs

2019-05-09 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836155#comment-16836155
 ] 

Elek, Marton commented on HDDS-1505:


Sounds reasonable. ozone.enabled was introduced when ozone was packaged 
together with the hdfs/hadoop tar file, to be sure that ozone is not started 
unintentionally. As we now have a separate distribution for ozone, we don't 
need this additional safety switch (IMHO).

> Remove "ozone.enabled" parameter from ozone configs
> ---
>
> Key: HDDS-1505
> URL: https://issues.apache.org/jira/browse/HDDS-1505
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Priority: Minor
>
> Remove "ozone.enabled" config as it is no longer needed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1441) Remove usage of getRetryFailureException

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1441?focusedWorklogId=239626=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239626
 ]

ASF GitHub Bot logged work on HDDS-1441:


Author: ASF GitHub Bot
Created on: 09/May/19 06:47
Start Date: 09/May/19 06:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #745: HDDS-1441. Remove 
usage of getRetryFailureException. (swagle)
URL: https://github.com/apache/hadoop/pull/745#issuecomment-490766138
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 409 | trunk passed |
   | +1 | compile | 201 | trunk passed |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 822 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 233 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 412 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 20 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 16 | The patch fails to run checkstyle in hadoop-hdds |
   | -0 | checkstyle | 15 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 17 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3203 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/745 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 4d958e09fd2f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f257497 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-745/out/maven-patch-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-745/8/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-745/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 

[jira] [Assigned] (HDFS-8631) WebHDFS : Support get/setQuota

2019-05-09 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore reassigned HDFS-8631:


Assignee: Xue Liu  (was: Surendra Singh Lilhore)

> WebHDFS : Support get/setQuota
> --
>
> Key: HDFS-8631
> URL: https://issues.apache.org/jira/browse/HDFS-8631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.2
>Reporter: nijel
>Assignee: Xue Liu
>Priority: Major
> Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, 
> HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, 
> HDFS-8631-006.patch
>
>
> User is able do quota management from filesystem object. Same operation can 
> be allowed trough REST API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-09 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=239613=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-239613
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 09/May/19 06:32
Start Date: 09/May/19 06:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-490762355
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 393 | trunk passed |
   | +1 | compile | 199 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | trunk passed |
   | 0 | spotbugs | 236 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 409 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 411 | the patch passed |
   | +1 | compile | 207 | the patch passed |
   | +1 | javac | 207 | the patch passed |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 656 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | the patch passed |
   | +1 | findbugs | 429 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 135 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1247 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5576 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 235d56cf079b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c5fa2e |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/4/testReport/ |
   | Max. process+thread count | 5055 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-798/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 239613)
Time Spent: 3h 40m  (was: 3.5h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> As with OM HA, we are planning 

[jira] [Commented] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-05-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836118#comment-16836118
 ] 

Hadoop QA commented on HDFS-14356:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 3 new + 174 unchanged 
- 1 fixed = 177 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m  6s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}292m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests |