[jira] [Work logged] (HDDS-1487) Bootstrap React framework for Recon UI

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1487?focusedWorklogId=244822&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244822
 ]

ASF GitHub Bot logged work on HDDS-1487:


Author: ASF GitHub Bot
Created on: 20/May/19 05:49
Start Date: 20/May/19 05:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #831: HDDS-1487. 
Bootstrap React framework for Recon UI
URL: https://github.com/apache/hadoop/pull/831#issuecomment-493847154
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 399 | trunk passed |
   | +1 | compile | 197 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 962 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 114 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 506 | the patch passed |
   | -1 | jshint | 2527 | The patch generated 16 new + 0 unchanged - 7061 fixed 
= 16 total (was 7061) |
   | +1 | compile | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 640 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 139 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1514 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7420 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/831 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml jshint |
   | uname | Linux ca20bb919b7c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0d1d7c8 |
   | Default Java | 1.8.0_212 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/artifact/out/diff-patch-jshint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/testReport/ |
   | Max. process+thread count | 4272 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/2/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244822)
Time Spent: 40m  (was: 0.5h)

> Bootstrap React framework for Recon UI
> --
>
> Key: HDDS-1487
> URL: https://issues.apache.org/jira/browse/HDDS-1487
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 

[jira] [Updated] (HDDS-1560) RejectedExecutionException on datanode after shutting it down

2019-05-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1560:

Labels: MiniOzoneChaosCluster  (was: )

> RejectedExecutionException on datanode after shutting it down
> -
>
> Key: HDDS-1560
> URL: https://issues.apache.org/jira/browse/HDDS-1560
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> RejectedExecutionException on datanode after shutting it down
> {code}
> 2019-05-20 00:38:52,757 ERROR statemachine.DatanodeStateMachine 
> (DatanodeStateMachine.java:start(199)) - Unable to finish the execution.
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ExecutorCompletionService$QueueingFuture@74b926e9 
> rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@15e1f69d[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed 
> tasks = 90]
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
> at 
> java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.execute(RunningDatanodeState.java:90)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:375)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:186)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:349)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1560) RejectedExecutionException on datanode after shutting it down

2019-05-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1560:
---

 Summary: RejectedExecutionException on datanode after shutting it 
down
 Key: HDDS-1560
 URL: https://issues.apache.org/jira/browse/HDDS-1560
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh


RejectedExecutionException on datanode after shutting it down

{code}
2019-05-20 00:38:52,757 ERROR statemachine.DatanodeStateMachine 
(DatanodeStateMachine.java:start(199)) - Unable to finish the execution.
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ExecutorCompletionService$QueueingFuture@74b926e9 rejected 
from org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@15e1f69d[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed 
tasks = 90]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at 
java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
at 
org.apache.hadoop.ozone.container.common.states.datanode.RunningDatanodeState.execute(RunningDatanodeState.java:90)
at 
org.apache.hadoop.ozone.container.common.statemachine.StateContext.execute(StateContext.java:375)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:186)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:349)
at java.lang.Thread.run(Thread.java:748)

{code}
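
For context, this is the documented java.util.concurrent behavior: once a ThreadPoolExecutor has been shut down, its default AbortPolicy refuses any further submissions. A minimal standalone sketch (illustration only, not the datanode code) that reproduces the same exception:

{code:java}
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedAfterShutdown {
  public static void main(String[] args) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    ExecutorCompletionService<Void> ecs =
        new ExecutorCompletionService<>(executor);

    executor.shutdownNow();  // simulate the datanode shutdown path

    try {
      // Mirrors RunningDatanodeState.execute() submitting work after the
      // underlying pool has already terminated.
      ecs.submit(() -> null);
    } catch (RejectedExecutionException e) {
      // The default AbortPolicy rejects tasks once the pool is terminated.
      System.err.println("Task rejected after shutdown: " + e);
    }
  }
}
{code}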



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1558) IllegalArgumentException while processing container Reports

2019-05-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1558:

Labels: MiniOzoneChaosCluster  (was: )

> IllegalArgumentException while processing container Reports
> ---
>
> Key: HDDS-1558
> URL: https://issues.apache.org/jira/browse/HDDS-1558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> IllegalArgumentException while processing container Reports
> {code}
> 2019-05-19 23:15:04,137 ERROR events.SingleThreadExecutor 
> (SingleThreadExecutor.java:lambda$onMessage$1(88)) - Error on execution 
> message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@1a117ebc
> java.lang.IllegalArgumentException
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
> at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.updateContainerState(AbstractContainerReportHandler.java:178)
> at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.processContainerReplica(AbstractContainerReportHandler.java:85)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:124)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:46)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
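
The message-less java.lang.IllegalArgumentException at the top of the trace is characteristic of Guava's single-argument Preconditions.checkArgument. A minimal sketch of the pattern (the condition shown is hypothetical, not the actual check in AbstractContainerReportHandler):

{code:java}
import com.google.common.base.Preconditions;

class ContainerStateCheck {
  // Hypothetical stand-in for the failing check in updateContainerState().
  void updateContainerState(long reportedBcsId, long containerBcsId) {
    // A one-argument checkArgument throws a bare IllegalArgumentException
    // with no message, exactly as seen in the log above.
    Preconditions.checkArgument(reportedBcsId >= containerBcsId);
  }
}
{code}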



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2019-05-19 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843654#comment-16843654
 ] 

Akira Ajisaka commented on HDFS-13255:
--

Thanks [~ayushtkn] for the comment.

bq. Does restricting this behavior have any advantage?

IMO, there is no advantage to restricting this behavior.

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wu Weiwei
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HDFS-13255-HDFS-13891-wip-001.patch
>
>
> When deleting a ns-fed path that includes mount point paths, an error is issued.
> Each mount point path needs to be deleted independently.
> Operation steps:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-05-19 Thread Santosh Marella (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843647#comment-16843647
 ] 

Santosh Marella commented on HDFS-12914:


[~jojochuang] - We may have hit this issue recently, and some initial 
investigation points to the scenario described by [~daryn] in the description. 
I'm new to this area, but from what I have learned so far, it seems that 
HDFS-14314 fixes a related but slightly different scenario. Please see below.

 

We have a DN with 12 disks. When we restarted a standby NN, the DN registered 
itself, got a lease for a full block report (FBR), and sent an FBR containing 
12 reports, one per disk. However, only 3 of them were processed; the remaining 
9 were not, because the lease had expired before the 4th report was processed. 
This essentially means the FBR was only partially processed, and, in our case, 
this *might* be one of the reasons the NN is taking so long to come out of 
safe mode (the safe block count takes too long to reach the threshold due to 
the partial FBR processing).

 

Below are some raw notes I've collected while investigating this issue. Sorry 
for being verbose, but I hope it helps everyone. We are on Hadoop 2.9.2.

I dug through the logs and observed the following for a DN for which only 3 out 
of 12 reports were processed. The DN registered itself with the NN, then sent 
an FBR that contained 12 reports (one for each disk). The NN processed 3 of 
them (indicated by the *processing first storage report* and *processing time* 
entries in the log statements). For the 9 remaining reports, however, it 
printed *lease xxx is not valid for DN* messages.
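
Conceptually, the NN-side handling looks like the sketch below (an illustration only; names and signatures are approximations, not the actual 2.9.2 BlockManager code). Because all 12 per-storage reports are validated against the same lease, a lease that expires mid-way drops the entire remainder of the FBR:

{code:java}
// Illustrative sketch, not the real BlockManager/BlockReportLeaseManager
// code: one FBR carries one report per storage (disk), and each report is
// checked against the same lease before it is processed.
void processFullBlockReport(DatanodeID dn, StorageReport[] reports,
    long leaseId) {
  for (StorageReport report : reports) {     // 12 reports, one per disk
    if (!leaseManager.checkLease(dn, leaseId)) {
      // Lease expired after report #3, so reports #4..#12 are dropped and
      // the FBR is left partially processed; the safe block count only
      // reflects the storages processed so far.
      LOG.warn("BR lease {} is not valid for DN {}", leaseId, dn);
      return;
    }
    processStorageReport(dn, report);
  }
}
{code}

The log excerpt that follows shows exactly this sequence for the DN in question.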

 
{code:java}
2019-05-16 15:15:35,028 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
registerDatanode: from DatanodeRegistration(10.54.63.120:50010, 
datanodeUuid=7493442a-c552-43f4-b6bd-728be292f66d, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-f4a0a2ae-9e3d-41d5-b98a-e0e77ed0249b;nsid=682930173;c=1406912757005)
 storage 7493442a-c552-43f4-b6bd-728be292f66d
2019-05-16 15:15:35,028 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: 
Registered DN 7493442a-c552-43f4-b6bd-728be292f66d (10.54.63.120:50010).
2019-05-16 15:31:11,578 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: Processing first storage report for 
DS-3e8d8352-ecc9-45cb-a39b-86f10d8aa386 from datanode 
7493442a-c552-43f4-b6bd-728be292f66d
2019-05-16 15:31:11,941 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: from storage DS-3e8d8352-ecc9-45cb-a39b-86f10d8aa386 node 
DatanodeRegistration(10.54.63.120:50010, 
datanodeUuid=7493442a-c552-43f4-b6bd-728be292f66d, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-f4a0a2ae-9e3d-41d5-b98a-e0e77ed0249b;nsid=682930173;c=1406912757005),
 blocks: 12690, hasStaleStorage: true, processing time: 363 msecs, 
invalidatedBlocks: 0
2019-05-16 15:31:17,496 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: Processing first storage report for 
DS-600c8ca1-3f99-41fc-a784-f663b928fe21 from datanode 
7493442a-c552-43f4-b6bd-728be292f66d
2019-05-16 15:31:17,851 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: from storage DS-600c8ca1-3f99-41fc-a784-f663b928fe21 node 
DatanodeRegistration(10.54.63.120:50010, 
datanodeUuid=7493442a-c552-43f4-b6bd-728be292f66d, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-f4a0a2ae-9e3d-41d5-b98a-e0e77ed0249b;nsid=682930173;c=1406912757005),
 blocks: 12670, hasStaleStorage: true, processing time: 355 msecs, 
invalidatedBlocks: 0
2019-05-16 15:31:23,465 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: Processing first storage report for 
DS-3e7dc4c5-ab4e-40d1-8f32-64fe28081f94 from datanode 
7493442a-c552-43f4-b6bd-728be292f66d
2019-05-16 15:31:23,821 INFO BlockStateChange: BLOCK* processReport 
0xb4fb52822c9e3f03: from storage DS-3e7dc4c5-ab4e-40d1-8f32-64fe28081f94 node 
DatanodeRegistration(10.54.63.120:50010, 
datanodeUuid=7493442a-c552-43f4-b6bd-728be292f66d, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-f4a0a2ae-9e3d-41d5-b98a-e0e77ed0249b;nsid=682930173;c=1406912757005),
 blocks: 12698, hasStaleStorage: true, processing time: 356 msecs, 
invalidatedBlocks: 0
2019-05-16 15:31:29,419 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: Removing 
expired block report lease 0xfd013f0084d0ed2d for DN 
7493442a-c552-43f4-b6bd-728be292f66d.
2019-05-16 15:31:29,419 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: BR lease 
0xfd013f0084d0ed2d is not valid for DN 7493442a-c552-43f4-b6bd-728be292f66d, 
because the lease has expired.
2019-05-16 15:31:35,891 WARN 
org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager: BR lease 
0xfd013f0084d0ed2d is not valid for DN 7493442a-c552-43f4-b6bd-728be292f66d, 
because the DN is 

[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843638#comment-16843638
 ] 

Ayush Saxena commented on HDFS-14486:
-

Test failures seem not related!

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch, HDFS-14486-02.patch, 
> HDFS-14486-03.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.
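
In other words, the pattern the tool flags is the following (the specific exception class name here is hypothetical, for illustration only):

{code:java}
// Instead of a general exception that recovery code cannot distinguish:
//   throw new IOException("metadata file is corrupted");
// a more specific class lets callers handle this case precisely:
throw new CorruptMetadataFileException("metadata file is corrupted");
{code}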



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244790
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285423185
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
 
 Review comment:
   Can you add a short description of this flag?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244790)
Time Spent: 4h 20m  (was: 4h 10m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer, and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> That is, while a sync is in progress, new requests go to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. Flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  
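
As a rough illustration of the scheme described above (a hedged sketch with simplified names, not the code under review in PR #810):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hedged double-buffer sketch: writers append to currentBuffer; a single
// outstanding flush swaps the buffers and commits readyBuffer to the DB
// in one batch, replacing per-operation rocksdb.put() calls.
class DoubleBufferSketch<E> {
  private List<E> currentBuffer = new ArrayList<>();
  private List<E> readyBuffer = new ArrayList<>();
  private boolean flushOutstanding = false;

  synchronized void add(E entry) {
    currentBuffer.add(entry);
  }

  void maybeFlush() {
    synchronized (this) {
      if (flushOutstanding || currentBuffer.isEmpty()) {
        return;                          // a batch is already in flight
      }
      flushOutstanding = true;
      List<E> tmp = currentBuffer;       // swap the two buffers
      currentBuffer = readyBuffer;
      readyBuffer = tmp;
    }
    try {
      commitBatch(readyBuffer);          // single batch commit to the DB
      readyBuffer.clear();
    } catch (IOException e) {
      // Per the note above: a failed flush is treated as catastrophic and
      // the OM would be terminated rather than allowed to diverge.
      throw new IllegalStateException("Flush to OM DB failed", e);
    } finally {
      synchronized (this) {
        flushOutstanding = false;
      }
    }
  }

  // Stand-in for the real DB batch commit (e.g. a RocksDB write batch).
  void commitBatch(List<E> batch) throws IOException {
  }
}
{code}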



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244789
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285423395
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer
+   * and commits them to the DB.
+   */
+  private void flushTransactions() {
+    while (true) {
+      if (canFlush()) {
+        syncToDB = true;
+        setReadyBuffer();
+        final BatchOperation batchOperation = omMetadataManager.getStore()
+            .initBatchOperation();
+
+        readyBuffer.iterator().forEachRemaining((entry) -> {
+          try {
+            entry.getResponse().addToRocksDBBatch(omMetadataManager,
 
 Review comment:
   We should not assume we are using RocksDB here. We may need to update the 
function name to addToDbBatch.
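
A storage-agnostic signature along the lines of this suggestion might look like the following (a naming sketch only, assuming the rename goes through):

{code:java}
// Hypothetical DB-agnostic form of the response hook: the response only
// sees a generic BatchOperation, so RocksDB stays an implementation
// detail behind OMMetadataManager.
public abstract class OMClientResponse {
  public abstract void addToDBBatch(OMMetadataManager omMetadataManager,
      BatchOperation batchOperation) throws IOException;
}
{code}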
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244789)
Time Spent: 4h 20m  (was: 4h 10m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244791
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285422717
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
 
 Review comment:
   @bharatviswa504 can you add a slightly more descriptive comment? Maybe 3-4 
lines on the purpose of the double buffer and some high-level implementation 
notes.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244791)
Time Spent: 4.5h  (was: 4h 20m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer, and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> That is, while a sync is in progress, new requests go to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. Flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244787
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285423558
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer
+   * and commits them to the DB.
+   */
+  private void flushTransactions() {
+    while (true) {
+      if (canFlush()) {
+        syncToDB = true;
+        setReadyBuffer();
 
 Review comment:
   Let's discuss the synchronization. It probably needs to be fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244787)
Time Spent: 4h 10m  (was: 4h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244786
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285419063
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer
+   * and commits them to the DB.
+   */
+  private void flushTransactions() {
+    while (true) {
+      if (canFlush()) {
+        syncToDB = true;
+        setReadyBuffer();
+        final BatchOperation batchOperation = omMetadataManager.getStore()
+            .initBatchOperation();
+
+        readyBuffer.iterator().forEachRemaining((entry) -> {
+          try {
+            entry.getResponse().addToRocksDBBatch(omMetadataManager,
+                batchOperation);
+          } catch (IOException ex) {
+            // Got an exception while adding the entry to the RocksDB batch.
+            // We should terminate the OM.
+            String message = "During flush to DB encountered error " +
+                ex.getMessage();
+            ExitUtil.terminate(1, message);
+          }
+        });
+
+        try {
+          omMetadataManager.getStore().commitBatchOperation(batchOperation);
+        } catch (IOException ex) {
+          // Got an exception during the flush to RocksDB.
+          // We should terminate the OM.
+          String message = "During flush to DB encountered error " +
+              ex.getMessage();
+          ExitUtil.terminate(1, message);
+        }
+
+        int flushedTransactionsSize = readyBuffer.size();
+        flushedTransactionCount.addAndGet(flushedTransactionsSize);
+        flushIterations.incrementAndGet();
+
+        LOG.info("Sync Iteration {} flushed transactions in this iteration{}",
 
 Review comment:
   Agreed, this should be a debug log.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, 

[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244788&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244788
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 04:04
Start Date: 20/May/19 04:04
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285423337
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer
+   * and commits them to the DB.
+   */
+  private void flushTransactions() {
+    while (true) {
 
 Review comment:
   Looks like this is spinning the CPU in a tight loop. It should wait until 
the current flush completes.
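
One conventional way to avoid the spin, sketched here as a suggestion rather than the eventual fix, is to block on the buffer's monitor until there is work to flush:

{code:java}
// Hedged sketch of a non-spinning loop: block until canFlush() instead of
// re-checking it in a tight loop; the add/append path would call notify().
// isRunning is an assumed shutdown flag, not a field from the patch.
private synchronized void awaitCanFlush() throws InterruptedException {
  while (!canFlush()) {
    wait();
  }
}

private void flushTransactions() {
  while (isRunning) {
    try {
      awaitCanFlush();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return;
    }
    syncToDB = true;
    setReadyBuffer();
    // ... batch and commit readyBuffer as before ...
  }
}
{code}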
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244788)
Time Spent: 4h 10m  (was: 4h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer, and the other is 
> readyBuffer. 

[jira] [Work logged] (HDDS-1487) Bootstrap React framework for Recon UI

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1487?focusedWorklogId=244784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244784
 ]

ASF GitHub Bot logged work on HDDS-1487:


Author: ASF GitHub Bot
Created on: 20/May/19 03:47
Start Date: 20/May/19 03:47
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #831: HDDS-1487. 
Bootstrap React framework for Recon UI
URL: https://github.com/apache/hadoop/pull/831#issuecomment-493829904
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244784)
Time Spent: 0.5h  (was: 20m)

> Bootstrap React framework for Recon UI
> --
>
> Key: HDDS-1487
> URL: https://issues.apache.org/jira/browse/HDDS-1487
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Bootstrap React with TypeScript, Ant, LESS, and other necessary libraries for 
> Recon UI. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244780
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 03:38
Start Date: 20/May/19 03:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285420994
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketDeleteResponse.java
 ##
 @@ -0,0 +1,57 @@
+/**
 
 Review comment:
   Added them for testing purposes, to test the DoubleBuffer implementation.
   The actual usage and the classes will be changed further according to the 
needs of HA in a further Jira.
   
   Bucket-related operations are handled in PR #827.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244780)
Time Spent: 4h  (was: 3h 50m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer, and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> That is, while a sync is in progress, new requests go to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. Flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244779
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 03:37
Start Date: 20/May/19 03:37
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #810: 
HDDS-1512. Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285420979
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketCreateResponse.java
 ##
 @@ -0,0 +1,52 @@
+/**
 
 Review comment:
   Added them for testing purposes, to test the DoubleBuffer implementation.
   The actual usage and the classes will be changed further according to the 
needs of HA in a further Jira.
   
   Bucket-related operations are handled in this PR: 
https://github.com/apache/hadoop/pull/827.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244779)
Time Spent: 3h 50m  (was: 3h 40m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to the OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose using an implementation similar to the HDFS EditsDoubleBuffer. We 
> shall flush RocksDB transactions in batches, instead of the current way of 
> calling rocksdb.put() after every operation. At a given time only one batch 
> will be outstanding for flush while newer transactions are accumulated in 
> memory to be flushed later.
>  
> The DoubleBuffer has 2 buffers: one is currentBuffer, and the other is 
> readyBuffer. We add an entry to the current buffer and check whether another 
> flush call is outstanding. If not, we flush to disk; otherwise we add entries 
> to the other buffer while the sync is happening.
>  
> That is, while a sync is in progress, new requests go to the other buffer, 
> and when we can sync we use a *RocksDB batch commit to sync to disk, instead 
> of rocksdb put.*
>  
> Note: If the flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. Flush failure should be 
> considered a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; integrating 
> it into the current OM will be done in further Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1559) Include committedBytes to determine Out of Space in VolumeChoosingPolicy

2019-05-19 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1559:
---

 Summary: Include committedBytes to determine Out of Space in 
VolumeChoosingPolicy
 Key: HDDS-1559
 URL: https://issues.apache.org/jira/browse/HDDS-1559
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Supratim Deka
Assignee: Supratim Deka


This is a follow-up to HDDS-1511 and HDDS-1535.

Currently, when creating a new Container, the DN invokes 
RoundRobinVolumeChoosingPolicy:chooseVolume(). This routine checks that 
(volume available space > container max size) and, if no eligible volume is 
found, throws a DiskOutOfSpaceException.

However, the computation of available space does not take into account the 
space that will be consumed by writes to existing containers which are still 
Open and accepting chunk writes.

This Jira proposes to enhance the space availability check in chooseVolume by 
including committed space (committedBytes in HddsVolume) in the equation, as 
sketched below.

The handling/management of the exception in Ratis will not be modified in 
this Jira. That will be scoped separately as part of the Datanode IO Failure 
handling work.
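
A minimal sketch of the proposed check, assuming hypothetical parameter names 
for the available and committed space values (the exact HddsVolume accessors 
may differ):

{code}
// Illustrative only: reserve space already promised to Open containers
// (committedBytes) before admitting a new container on this volume.
static boolean hasEnoughSpace(long availableBytes, long committedBytes,
    long containerMaxSize) {
  return (availableBytes - committedBytes) >= containerMaxSize;
}
{code}

With this check, a volume whose free space is nominally above the container 
max size is still skipped when its committed-but-unwritten bytes would consume 
that headroom.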



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244756
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 03:13
Start Date: 20/May/19 03:13
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285418524
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketCreateResponse.java
 ##
 @@ -0,0 +1,52 @@
+/**
 
 Review comment:
   @bharatviswa504 why is this change a part of the DoubleBuffer patch? This 
patch should just introduce the DoubleBuffer.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244756)
Time Spent: 3h 40m  (was: 3.5h)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose an implementation similar to the HDFS EditsDoubleBuffer. We shall 
> flush RocksDB transactions in batches, instead of the current approach of 
> calling rocksdb.put() after every operation. At any given time, only one 
> batch will be outstanding for flush, while newer transactions accumulate in 
> memory to be flushed later.
>  
> The DoubleBuffer holds two buffers: currentBuffer and readyBuffer. We add an 
> entry to the current buffer and check whether another flush call is 
> outstanding. If not, we flush to disk; otherwise we add entries to the other 
> buffer while the sync is in progress.
>  
> Thus, while a sync is in progress, new requests go to the other buffer, and 
> when we can sync we use a *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If a flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> treated as a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1512) Implement DoubleBuffer in OzoneManager

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1512?focusedWorklogId=244755&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244755
 ]

ASF GitHub Bot logged work on HDDS-1512:


Author: ASF GitHub Bot
Created on: 20/May/19 03:13
Start Date: 20/May/19 03:13
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #810: HDDS-1512. 
Implement DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r285418551
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketDeleteResponse.java
 ##
 @@ -0,0 +1,57 @@
+/**
 
 Review comment:
   Same here. This should be moved to a subsequent patch right?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244755)
Time Spent: 3.5h  (was: 3h 20m)

> Implement DoubleBuffer in OzoneManager
> --
>
> Key: HDDS-1512
> URL: https://issues.apache.org/jira/browse/HDDS-1512
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> This Jira is created to implement DoubleBuffer in OzoneManager to flush 
> transactions to OM DB.
>  
> h2. Flushing Transactions to RocksDB:
> We propose an implementation similar to the HDFS EditsDoubleBuffer. We shall 
> flush RocksDB transactions in batches, instead of the current approach of 
> calling rocksdb.put() after every operation. At any given time, only one 
> batch will be outstanding for flush, while newer transactions accumulate in 
> memory to be flushed later.
>  
> The DoubleBuffer holds two buffers: currentBuffer and readyBuffer. We add an 
> entry to the current buffer and check whether another flush call is 
> outstanding. If not, we flush to disk; otherwise we add entries to the other 
> buffer while the sync is in progress.
>  
> Thus, while a sync is in progress, new requests go to the other buffer, and 
> when we can sync we use a *RocksDB batch commit to sync to disk, instead of 
> rocksdb put.*
>  
> Note: If a flush to disk fails on any OM, we shall terminate the 
> OzoneManager so that the OM DBs do not diverge. A flush failure should be 
> treated as a catastrophic failure.
>  
> The scope of this Jira is to add the DoubleBuffer implementation; 
> integrating it into the current OM will be done in further Jiras.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843593#comment-16843593
 ] 

Hudson commented on HDDS-1499:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16573 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16573/])
HDDS-1499. OzoneManager Cache. (#798) (arp7: rev 
0d1d7c86ec34fabc62c0e3844aca3733024bc172)
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/cache/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheKey.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestTypedRDBTableStore.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBTable.java
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/TableCache.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/CacheValue.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/cache/TestPartialTableCache.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/EpochEntry.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/package-info.java


> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.
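
To make the cache-first read path concrete, here is a minimal sketch; the 
class and method names below are assumptions for illustration, not the actual 
TypedTable code.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative cache-first table; names are not the real TypedTable API. */
class CachedTable<K, V> {
  private final Map<K, V> cache = new ConcurrentHashMap<>();

  V get(K key) throws IOException {
    V cached = cache.get(key);       // serve from the cache when present
    if (cached != null) {
      return cached;
    }
    return getFromDb(key);           // otherwise fall through to RocksDB
  }

  void addCacheEntry(K key, V value) {
    cache.put(key, value);           // overrides any existing entry for key
  }

  V getFromDb(K key) throws IOException {
    return null;                     // placeholder for the RocksDB lookup
  }
}
{code}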



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1422) Exception during DataNode shutdown

2019-05-19 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843586#comment-16843586
 ] 

Arpit Agarwal commented on HDDS-1422:
-

Thanks [~bharatviswa] for the reviews and commit!

> Exception during DataNode shutdown
> --
>
> Key: HDDS-1422
> URL: https://issues.apache.org/jira/browse/HDDS-1422
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The following exception during DN shutdown should be avoided, as it adds 
> noise to the logs and is not a real issue.
> {code}
> 2019-04-10 17:48:27,307 WARN  volume.VolumeSet 
> (VolumeSet.java:getNodeReport(476)) - Failed to get scmUsed and remaining for 
> container storage location 
> /Users/agarwal/src/hadoop/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-f4d89966-146a-4690-8841-36af1993522f/datanode-17/data/containers
> java.io.IOException: Volume Usage thread is not running. This error is 
> usually seen during DataNode shutdown.
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:119)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:472)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:238)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:115)
>   at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1499:

Fix Version/s: (was: 0.4.0)
   0.5.0

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1499:

  Resolution: Fixed
   Fix Version/s: 0.4.0
Target Version/s:   (was: 0.5.0)
  Status: Resolved  (was: Patch Available)

I've committed this to trunk. Thanks for the contribution [~bharatviswa], and 
thanks [~anu] and [~hanishakoneru] for the reviews!

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=244739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244739
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 20/May/19 02:23
Start Date: 20/May/19 02:23
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244739)
Time Spent: 12h  (was: 11h 50m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=244737&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244737
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 20/May/19 02:22
Start Date: 20/May/19 02:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r285413624
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
 ##
 @@ -44,6 +44,7 @@
*/
   Table getTable(String name) throws IOException;
 
+
 
 Review comment:
   Committing this for now since Anu +1'ed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244737)
Time Spent: 11h 40m  (was: 11.5h)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=244738&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244738
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 20/May/19 02:22
Start Date: 20/May/19 02:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#issuecomment-493818528
 
 
   +1 LGTM.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244738)
Time Spent: 11h 50m  (was: 11h 40m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=244736&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244736
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 20/May/19 02:21
Start Date: 20/May/19 02:21
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r285413487
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/TableCache.java
 ##
 @@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+/**
+ * Cache used for RocksDB tables.
+ * @param <CACHEKEY>
+ * @param <CACHEVALUE>
+ */
+
+@Private
+@Evolving
+public interface TableCache<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue> {

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1499) OzoneManager Cache

2019-05-19 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=244735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-244735
 ]

ASF GitHub Bot logged work on HDDS-1499:


Author: ASF GitHub Bot
Created on: 20/May/19 02:20
Start Date: 20/May/19 02:20
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r285413415
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -106,6 +142,40 @@ public void close() throws Exception {
 
   }
 
+  @Override
+  public void addCacheEntry(CacheKey cacheKey,
+  CacheValue cacheValue) {
+// This will override the entry if there is already entry for this key.
+cache.put(cacheKey, cacheValue);
+  }
+
+
+  @Override
+  public void cleanupCache(long epoch) {
 
 Review comment:
   We can revisit this later.
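  
  For context, one plausible shape of an epoch-based cleanupCache is sketched 
below; this is illustrative only, and the real PartialTableCache may differ.

{code}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative epoch-tagged cache: evict everything flushed up to an epoch. */
class EpochCache<K, V> {
  private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();

  static final class Entry<V> {
    final V value;
    final long epoch;   // transaction index at which the entry was added
    Entry(V value, long epoch) { this.value = value; this.epoch = epoch; }
  }

  void put(K key, V value, long epoch) {
    cache.put(key, new Entry<>(value, epoch));
  }

  /** Drop entries whose epoch <= the last epoch flushed to RocksDB. */
  void cleanup(long flushedEpoch) {
    Iterator<Map.Entry<K, Entry<V>>> it = cache.entrySet().iterator();
    while (it.hasNext()) {
      if (it.next().getValue().epoch <= flushedEpoch) {
        it.remove();
      }
    }
  }
}
{code}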
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 244735)
Time Spent: 11h 20m  (was: 11h 10m)

> OzoneManager Cache
> --
>
> Key: HDDS-1499
> URL: https://issues.apache.org/jira/browse/HDDS-1499
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement a cache for Table.
> With OM HA, we are planning a double buffer implementation to flush 
> transactions in a batch, instead of using rocksdb put() for every operation. 
> Once that is in place, we need a cache in OzoneManager HA to handle/serve 
> requests for validation and for returning responses.
>  
> This Jira will implement the cache as an integral part of the table. This 
> way, users of the table do not need to check the cache and the DB 
> separately; we can update the get API in the table to consult the cache 
> first.
>  
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs to clean up and to add entries to the cache.
> Adding entries into the cache will be done in further Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1558) IllegalArgumentException while processing container Reports

2019-05-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1558:
---

 Summary: IllegalArgumentException while processing container 
Reports
 Key: HDDS-1558
 URL: https://issues.apache.org/jira/browse/HDDS-1558
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


IllegalArgumentException while processing container Reports

{code}
2019-05-19 23:15:04,137 ERROR events.SingleThreadExecutor 
(SingleThreadExecutor.java:lambda$onMessage$1(88)) - Error on execution message 
org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@1a117ebc
java.lang.IllegalArgumentException
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.updateContainerState(AbstractContainerReportHandler.java:178)
at 
org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.processContainerReplica(AbstractContainerReportHandler.java:85)
at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:124)
at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
at 
org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:46)
at 
org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843460#comment-16843460
 ] 

Hadoop QA commented on HDFS-14486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969106/HDFS-14486-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3f29db40dff8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 732133c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26806/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26806/testReport/ |
| Max. process+thread count | 2931 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843418#comment-16843418
 ] 

Ayush Saxena commented on HDFS-14486:
-

Uploaded a patch checking in the reported changes that made sense to me.
Most of the reports were about DiskErrorException being thrown for 
misconfigured values, which I feel is very unlikely in practice. HDFS-14469 
depicts a case where the change adds real value: on a misconfiguration, a 
DiskErrorException is retried, and it will fail again after the retry as well.
I have changed this behavior as reported by [~ebugs-in-cloud-systems].
Please review!


> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch, HDFS-14486-02.patch, 
> HDFS-14486-03.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.
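
To illustrate the kind of fix the report suggests, a narrower exception type 
lets callers handle the corruption case specifically. The class names below 
are hypothetical and are not the actual HDFS-8224 change:

{code}
import java.io.IOException;

// Hypothetical narrow exception type; callers can catch it specifically.
class CorruptMetadataException extends IOException {
  CorruptMetadataException(String msg) { super(msg); }
}

class MetadataReader {
  void readMetadata(byte[] header) throws IOException {
    if (header.length < 4) {
      // Throwing the specific class (rather than a bare IOException) lets a
      // recovery path distinguish corruption from ordinary I/O failures.
      throw new CorruptMetadataException("metadata header too short");
    }
  }
}
{code}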



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14486:

Attachment: HDFS-14486-03.patch

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch, HDFS-14486-02.patch, 
> HDFS-14486-03.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843406#comment-16843406
 ] 

Hadoop QA commented on HDFS-14486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 214 unchanged - 0 fixed = 215 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969095/HDFS-14486-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0968b3fab933 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 732133c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26805/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26805/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26805/testReport/ |
| Max. process+thread 

[jira] [Updated] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14486:

Attachment: HDFS-14486-02.patch

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch, HDFS-14486-02.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843362#comment-16843362
 ] 

Hadoop QA commented on HDFS-14486:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.checker.TestStorageLocationChecker |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969090/HDFS-14486-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d6fa9ff3960c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 732133c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26804/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26804/testReport/ |
| Max. process+thread count | 2936 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Created] (HDDS-1557) Datanode exits because Ratis fails to shutdown ratis server

2019-05-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1557:
---

 Summary: Datanode exits because Ratis fails to shutdown ratis 
server 
 Key: HDDS-1557
 URL: https://issues.apache.org/jira/browse/HDDS-1557
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh


Datanode exits because Ratis fails to shutdown ratis server 

{code}
2019-05-19 12:07:19,276 INFO  impl.RaftServerImpl 
(RaftServerImpl.java:checkInconsistentAppendEntries(965)) - 
80747533-f47c-43de-85b8-e70db448c63f: inconsistency entries. 
Reply:99930d0a-72ab-4795-a3ac-f3c
fb61ca1bb<-80747533-f47c-43de-85b8-e70db448c63f#3132:FAIL,INCONSISTENCY,nextIndex:9057,term:33,followerCommit:9057
2019-05-19 12:07:19,276 WARN  impl.RaftServerProxy 
(RaftServerProxy.java:lambda$close$4(320)) - 
e143b976-ab35-4555-a800-7f05a2b1b738: Failed to close GRPC server
java.io.InterruptedIOException: e143b976-ab35-4555-a800-7f05a2b1b738: shutdown 
server with port 64605 failed
at 
org.apache.ratis.util.IOUtils.toInterruptedIOException(IOUtils.java:48)
at 
org.apache.ratis.grpc.server.GrpcService.closeImpl(GrpcService.java:160)
at 
org.apache.ratis.server.impl.RaftServerRpcWithProxy.lambda$close$2(RaftServerRpcWithProxy.java:76)
at 
org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$2(LifeCycle.java:231)
at 
org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:251)
at 
org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:229)
at 
org.apache.ratis.server.impl.RaftServerRpcWithProxy.close(RaftServerRpcWithProxy.java:76)
at 
org.apache.ratis.server.impl.RaftServerProxy.lambda$close$4(RaftServerProxy.java:318)
at 
org.apache.ratis.util.LifeCycle.lambda$checkStateAndClose$2(LifeCycle.java:231)
at 
org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:251)
at 
org.apache.ratis.util.LifeCycle.checkStateAndClose(LifeCycle.java:229)
at 
org.apache.ratis.server.impl.RaftServerProxy.close(RaftServerProxy.java:313)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.stop(XceiverServerRatis.java:432)
at 
org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.stop(OzoneContainer.java:201)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.close(DatanodeStateMachine.java:270)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.stopDaemon(DatanodeStateMachine.java:394)
at 
org.apache.hadoop.ozone.HddsDatanodeService.stop(HddsDatanodeService.java:449)
at 
org.apache.hadoop.ozone.HddsDatanodeService.terminateDatanode(HddsDatanodeService.java:429)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:208)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:349)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl.awaitTermination(ServerImpl.java:282)
at 
org.apache.ratis.grpc.server.GrpcService.closeImpl(GrpcService.java:158)
... 19 more
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1556) SCM should update leader information of a pipeline from pipeline report

2019-05-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1556:
---

 Summary: SCM should update leader information of a pipeline from 
pipeline report
 Key: HDDS-1556
 URL: https://issues.apache.org/jira/browse/HDDS-1556
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh


SCM currently hands out the nodes of a pipeline in a static order, which 
carries no information about the leader of the ring. SCM should learn about 
the leader in either of the following two ways (see the sketch after this 
list):

a) Ratis adds a callback to the state machine, which in turn notifies SCM of 
the state change.
b) SCM periodically updates this information from the heartbeat.
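
A minimal sketch of option (a), using a hypothetical listener interface 
rather than the actual Ratis StateMachine API:

{code}
// Hypothetical callback wiring; the real Ratis state machine hook may differ.
interface LeaderChangeListener {
  void leaderChanged(String pipelineId, String newLeaderId);
}

class ScmLeaderTracker implements LeaderChangeListener {
  @Override
  public void leaderChanged(String pipelineId, String newLeaderId) {
    // Update SCM's pipeline metadata so nodes can be handed out leader-first.
    System.out.println("Pipeline " + pipelineId + " leader is now " + newLeaderId);
  }
}
{code}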



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14490) RBF: Remove unnecessary quota checks

2019-05-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14490:

Fix Version/s: HDFS-13891

> RBF: Remove unnecessary quota checks
> 
>
> Key: HDFS-14490
> URL: https://issues.apache.org/jira/browse/HDFS-14490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14490-HDFS-13891-01.patch, 
> HDFS-14490-HDFS-13891-02.patch, HDFS-14490-HDFS-13891-03.patch
>
>
> Remove unnecessary quota checks for unrelated operations such as setEcPolicy, 
> getEcPolicy and similar  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14486:

Assignee: Ayush Saxena
  Status: Patch Available  (was: Open)

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-14486-01.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14486) The exception classes in some throw statements do not accurately describe why they are thrown

2019-05-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14486:

Attachment: HDFS-14486-01.patch

> The exception classes in some throw statements do not accurately describe why 
> they are thrown
> -
>
> Key: HDFS-14486
> URL: https://issues.apache.org/jira/browse/HDFS-14486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: eBugs
>Priority: Minor
> Attachments: HDFS-14486-01.patch
>
>
> Dear HDFS developers, we are developing a tool to detect exception-related 
> bugs in Java. Our prototype has spotted a few {{throw}} statements whose 
> exception class does not accurately describe why they are thrown. This can be 
> dangerous since it makes correctly handling them challenging. For example, in 
> an old bug, HDFS-8224, throwing a general {{IOException}} makes it difficult 
> to perform data recovery specifically when a metadata file is corrupted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org