[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.012.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch, HDDS-325.012.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the
> RPC call currently required for the datanode to send the acknowledgement for
> deleteBlocks.






[jira] [Updated] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-561:
-
Attachment: HDDS-561.001.patch

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch
>
>







[jira] [Updated] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-561:
-
Status: Patch Available  (was: Open)

> Move Node2ContainerMap and Node2PipelineMap to NodeManager
> --
>
> Key: HDDS-561
> URL: https://issues.apache.org/jira/browse/HDDS-561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-561.001.patch
>
>







[jira] [Created] (HDDS-561) Move Node2ContainerMap and Node2PipelineMap to NodeManager

2018-09-27 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-561:


 Summary: Move Node2ContainerMap and Node2PipelineMap to NodeManager
 Key: HDDS-561
 URL: https://issues.apache.org/jira/browse/HDDS-561
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain









[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-24 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16625409#comment-16625409
 ] 

Lokesh Jain commented on HDFS-13876:


Thanks [~smeng] for updating the patch! The patch looks good to me. +1

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch, 
> HDFS-13876.003.patch, HDFS-13876.004.patch, HDFS-13876.005.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.






[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623096#comment-16623096
 ] 

Lokesh Jain commented on HDFS-13876:


[~smeng] Thanks for updating the patch! I have a few minor comments.
 # TestHttpFSServer#testDisallowSnapshot:1164 - Comment should be "FileStatus 
should (not) have snapshot enabled bit set"
 # BaseTestHttpFSWith#testDisallowSnapshotException:1431 - Error condition 
should be "disallowSnapshot should not have succeeded".
 # Can you please fix the checkstyle issues?

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch, HDFS-13876.002.patch, 
> HDFS-13876.003.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.






[jira] [Commented] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623078#comment-16623078
 ] 

Lokesh Jain commented on HDFS-13893:


[~arpitagarwal] Thanks for reviewing the patch! I have used CommandLine.getArgs 
in the patch. For the command below,

 
{code:java}
hdfs diskbalancer random1 -report random2 random3
{code}
getArgs() returns the following array:
{code:java}
[hdfs, diskbalancer, random1, random2, random3]{code}
Therefore the patch throws an exception if args.length > 2.
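For illustration, a minimal sketch of that validation, assuming Apache Commons 
CLI's CommandLine; the class and method names here are illustrative, not 
necessarily those used in the patch.
{code:java}
import org.apache.commons.cli.CommandLine;

final class DiskBalancerArgCheck {
  // Reject invocations that carry unexpected positional arguments, e.g. the
  // "random2 random3" extras in the command shown above.
  static void verifyCommandOptions(CommandLine cmd) {
    String[] args = cmd.getArgs(); // positional (non-option) arguments
    if (args.length > 2) {
      throw new IllegalArgumentException(
          "Invalid or extra arguments: " + String.join(" ", args));
    }
  }
}
{code}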

 

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13893.001.patch
>
>
> {{Scenario:-}}
>  
> 1. Run the Disk Balancer commands, passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid arguments or 
> extra arguments.






[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-20 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.011.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the
> RPC call currently required for the datanode to send the acknowledgement for
> deleteBlocks.






[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622181#comment-16622181
 ] 

Lokesh Jain commented on HDDS-325:
--

v11 patch fixes the test failure.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch, HDDS-325.011.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the
> RPC call currently required for the datanode to send the acknowledgement for
> deleteBlocks.






[jira] [Commented] (HDFS-13876) HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16622167#comment-16622167
 ] 

Lokesh Jain commented on HDFS-13876:


[~smeng] Thanks for working on this! The patch looks very good to me. I have a 
few minor comments.
 # We could use createSnapshotTestsPreconditions for testDisallowSnapshot in 
BaseTestHttpFSWith. We can use it for testAllowSnapshot as well by including an 
extra flag for doing an allowSnapshot.
 # Similarly we can use snapshotTestPreconditions in TestHttpFSServer.

> HttpFS: Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT
> -
>
> Key: HDFS-13876
> URL: https://issues.apache.org/jira/browse/HDFS-13876
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: httpfs
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13876.001.patch
>
>
> Implement ALLOWSNAPSHOT, DISALLOWSNAPSHOT (from HDFS-9057) in HttpFS.






[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621867#comment-16621867
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v10 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the
> RPC call currently required for the datanode to send the acknowledgement for
> deleteBlocks.






[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-20 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.010.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, 
> HDDS-325.009.patch, HDDS-325.010.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the
> RPC call currently required for the datanode to send the acknowledgement for
> deleteBlocks.






[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Attachment: HDDS-464.004.patch

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch, HDDS-464.002.patch, 
> HDDS-464.003.patch, HDDS-464.004.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.352 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] 
> testBlockWriteViaRatis(org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient)
>  Time elapsed: 1.235 s <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:211)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.createContainer(ContainerProtocolCalls.java:297)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:197)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:47)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWriteViaRatis(TestCloseContainerHandlingByClient.java:403)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}






[jira] [Commented] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-17 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617863#comment-16617863
 ] 

Lokesh Jain commented on HDDS-464:
--

[~shashikant] Thanks for reviewing the patch! v4 patch addresses your comments.

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch, HDDS-464.002.patch, 
> HDDS-464.003.patch, HDDS-464.004.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.352 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] 
> testBlockWriteViaRatis(org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient)
>  Time elapsed: 1.235 s <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:211)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.createContainer(ContainerProtocolCalls.java:297)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:197)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:47)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWriteViaRatis(TestCloseContainerHandlingByClient.java:403)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}






[jira] [Updated] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13893:
---
Status: Patch Available  (was: Open)

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13893.001.patch
>
>
> {{Scenario:-}}
>  
> 1. Run the Disk Balancer commands, passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid arguments or 
> extra arguments.






[jira] [Updated] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13893:
---
Attachment: HDFS-13893.001.patch

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13893.001.patch
>
>
> {{Scenario:-}}
>  
> 1. Run the Disk Balancer commands, passing extra arguments:
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid arguments or 
> extra arguments.






[jira] [Updated] (HDDS-480) Add proto helper method to DatanodeDetails#Port

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-480:
-
Description: (was: Currently raft log does not make sure that any 
appendEntry has a term greater than or equal to the last applied entry's term 
in the log. This Jira aims to add that check.)

> Add proto helper method to DatanodeDetails#Port
> ---
>
> Key: HDDS-480
> URL: https://issues.apache.org/jira/browse/HDDS-480
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nanda kumar
>Priority: Major
>







[jira] [Updated] (HDDS-480) Add proto helper method to DatanodeDetails#Port

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-480:
-
Summary: Add proto helper method to DatanodeDetails#Port  (was: RaftLog 
should make sure appendEntries term are incremental in nature)

> Add proto helper method to DatanodeDetails#Port
> ---
>
> Key: HDDS-480
> URL: https://issues.apache.org/jira/browse/HDDS-480
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>
> Currently the raft log does not ensure that an appendEntry has a term greater
> than or equal to the last applied entry's term in the log. This Jira aims to
> add that check.






[jira] [Assigned] (HDDS-480) Add proto helper method to DatanodeDetails#Port

2018-09-17 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-480:


Assignee: Nanda kumar  (was: Lokesh Jain)

> Add proto helper method to DatanodeDetails#Port
> ---
>
> Key: HDDS-480
> URL: https://issues.apache.org/jira/browse/HDDS-480
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Nanda kumar
>Priority: Major
>
> Currently the raft log does not ensure that an appendEntry has a term greater
> than or equal to the last applied entry's term in the log. This Jira aims to
> add that check.






[jira] [Created] (HDDS-480) RaftLog should make sure appendEntries term are incremental in nature

2018-09-17 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-480:


 Summary: RaftLog should make sure appendEntries term are 
incremental in nature
 Key: HDDS-480
 URL: https://issues.apache.org/jira/browse/HDDS-480
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently the raft log does not ensure that an appendEntry has a term greater 
than or equal to the last applied entry's term in the log. This Jira aims to 
add that check.
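
For illustration, a minimal sketch of the proposed check (illustrative code, 
not Ratis's actual RaftLog):
{code:java}
final class RaftLogTermCheck {
  // Reject an appendEntry whose term is lower than the last applied entry's
  // term, so that entry terms in the log are non-decreasing.
  static void checkAppendEntryTerm(long lastAppliedTerm, long entryTerm) {
    if (entryTerm < lastAppliedTerm) {
      throw new IllegalStateException("appendEntry term " + entryTerm
          + " is lower than the last applied entry's term " + lastAppliedTerm);
    }
  }
}
{code}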






[jira] [Updated] (HDDS-475) Block Allocation returns same BlockID on different keys creation

2018-09-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-475:
-
Affects Version/s: (was: 0.2.1)

> Block Allocation returns same BlockID on different keys creation
> 
>
> Key: HDDS-475
> URL: https://issues.apache.org/jira/browse/HDDS-475
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
>
> BlockManagerImpl#allocateBlock returns the same BlockID. This leads to
> different key creations getting the same blockId.






[jira] [Assigned] (HDDS-472) TestDataValidate fails in trunk

2018-09-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-472:


Assignee: Lokesh Jain

> TestDataValidate fails in trunk
> ---
>
> Key: HDDS-472
> URL: https://issues.apache.org/jira/browse/HDDS-472
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Lokesh Jain
>Priority: Blocker
>
> {code:java}
> [INFO] Running org.apache.hadoop.ozone.freon.TestDataValidate
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 17.326 s <<< FAILURE! - in org.apache.hadoop.ozone.freon.TestDataValidate
> [ERROR] validateWriteTest(org.apache.hadoop.ozone.freon.TestDataValidate) 
> Time elapsed: 2.026 s <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<7>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at 
> org.apache.hadoop.ozone.freon.TestDataValidate.validateWriteTest(TestDataValidate.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}






[jira] [Commented] (HDDS-475) Block Allocation returns same BlockID on different keys creation

2018-09-16 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616696#comment-16616696
 ] 

Lokesh Jain commented on HDDS-475:
--

It appears that this might be happening only for containers in the OPEN state. 
The code block which returns a new BlockID for open containers is not 
synchronized. [~nandakumar131] Why do we take a read lock in the allocateBlock 
call? That lock is never taken by any other function, and it may not be 
required since the code blocks inside it are already synchronized.
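
For illustration, a hedged sketch (not the actual SCM code) of the race 
described above: without synchronization, two threads can interleave in the 
ID-handout block and both receive the same BlockID.
{code:java}
final class BlockIdSource {
  private long lastId = 0;

  // Broken: ++lastId is a read-modify-write, so two threads can interleave
  // and both return the same value.
  long allocateUnsafe() {
    return ++lastId;
  }

  // Fixed: the critical section is made atomic.
  synchronized long allocateSafe() {
    return ++lastId;
  }
}
{code}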

> Block Allocation returns same BlockID on different keys creation
> 
>
> Key: HDDS-475
> URL: https://issues.apache.org/jira/browse/HDDS-475
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
>
> BlockManagerImpl#allocateBlock returns the same BlockID. This leads to
> different key creations getting the same blockId.






[jira] [Updated] (HDDS-475) Block Allocation returns same BlockID on different keys creation

2018-09-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-475:
-
Fix Version/s: 0.2.1

> Block Allocation returns same BlockID on different keys creation
> 
>
> Key: HDDS-475
> URL: https://issues.apache.org/jira/browse/HDDS-475
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
>
> BlockManagerImpl#allocateBlock returns the same BlockID. This leads to
> different key creations getting the same blockId.






[jira] [Created] (HDDS-475) Block Allocation returns same BlockID on different keys creation

2018-09-16 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-475:


 Summary: Block Allocation returns same BlockID on different keys 
creation
 Key: HDDS-475
 URL: https://issues.apache.org/jira/browse/HDDS-475
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.2.1
Reporter: Lokesh Jain
Assignee: Lokesh Jain


BlockManagerImpl#allocateBlock returns the same BlockID. This leads to 
different key creations getting the same blockId.






[jira] [Commented] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2018-09-16 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616635#comment-16616635
 ] 

Lokesh Jain commented on HDDS-451:
--

The retry cache timeout config will be added to ozone via HDDS-464. I have set 
the default value to 10 minutes.
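
For illustration, a hedged sketch of how such a timeout is typically wired 
through a Hadoop-style configuration; the key name below is an assumption for 
illustration, not necessarily the exact key introduced by HDDS-464.
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

class RetryCacheTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key name; the actual key is defined by HDDS-464.
    conf.setTimeDuration("dfs.ratis.server.retry-cache.timeout.duration",
        10, TimeUnit.MINUTES); // default proposed above: 10 minutes
    System.out.println(conf.getTimeDuration(
        "dfs.ratis.server.retry-cache.timeout.duration", 0, TimeUnit.MINUTES));
  }
}
{code}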

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> steps taken :
> --
>  # Ran Put Key command to write 50GB data. Put Key client operation failed 
> after 17 mins.
> error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
> 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk 
> request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
>  
> {code}
>  






[jira] [Commented] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-16 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616634#comment-16616634
 ] 

Lokesh Jain commented on HDDS-464:
--

[~shashikant] Please take a look at the v3 patch.

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch, HDDS-464.002.patch, 
> HDDS-464.003.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.352 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] 
> testBlockWriteViaRatis(org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient)
>  Time elapsed: 1.235 s <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:211)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.createContainer(ContainerProtocolCalls.java:297)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:197)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:47)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWriteViaRatis(TestCloseContainerHandlingByClient.java:403)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}






[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Attachment: HDDS-464.003.patch

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch, HDDS-464.002.patch, 
> HDDS-464.003.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 21.352 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient
> [ERROR] 
> testBlockWriteViaRatis(org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient)
>  Time elapsed: 1.235 s <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:211)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.createContainer(ContainerProtocolCalls.java:297)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:197)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
> at 
> org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:47)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWriteViaRatis(TestCloseContainerHandlingByClient.java:403)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){code}






[jira] [Comment Edited] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2018-09-15 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616289#comment-16616289
 ] 

Lokesh Jain edited comment on HDDS-451 at 9/15/18 1:19 PM:
---

Actually I was wrong in the above calculations: I had not considered the time 
that could be spent in the Ratis client request timeout, which defaults to 3 
seconds. The maximum time spent can now be numRetries*(retryInterval + 
requestTimeoutDuration), which equals 50*(200 ms + 3000 ms) = 160 seconds. This 
is more than the default retry cache timeout of 60 seconds and could very well 
be the reason for the bug.
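
As a quick sanity check of that arithmetic (a jshell-style sketch; the variable 
names are illustrative, using the figures quoted above):
{code:java}
int numRetries = 50;
long retryIntervalMs = 200;    // ratis client retry interval
long requestTimeoutMs = 3000;  // ratis client request timeout (default 3 s)
long worstCaseMs = numRetries * (retryIntervalMs + requestTimeoutMs);
// worstCaseMs = 160000 ms = 160 s > the 60 s default retry cache timeout
{code}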


was (Author: ljain):
Actually I am wrong in the above calculations. I have not considered the time 
which could be spent in ratis client request timeout which defaults to 3 
seconds. Now the maximum time spent can be numRetries*(retryInterval + 
requestTimeoutDuration) which is equal to 160 seconds. This is less than the 
default retry cache timeout of 60 seconds and could very much be a possible 
reason for the bug.

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> steps taken :
> --
>  # Ran Put Key command to write 50GB data. Put Key client operation failed 
> after 17 mins.
> error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - 

[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-15 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Attachment: HDDS-464.002.patch

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch, HDDS-464.002.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2018-09-15 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616289#comment-16616289
 ] 

Lokesh Jain commented on HDDS-451:
--

Actually I am wrong in the above calculations. I had not considered the time 
that could be spent in the Ratis client request timeout, which defaults to 3 
seconds. The maximum time spent can therefore be numRetries*(retryInterval + 
requestTimeoutDuration), which is equal to 160 seconds. This is more than the 
default retry cache timeout of 60 seconds and could well be a possible reason 
for the bug.
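
For reference, the arithmetic works out as follows; a minimal sketch assuming 
the values cited in this thread (numRetries = 50, retryInterval = 200 ms, 
request timeout = 3 s), not values read from the Ozone configuration:

{code:java}
// Sketch of the retry budget calculation from the comment above.
public class RetryBudget {
  public static void main(String[] args) {
    int numRetries = 50;
    long retryIntervalMs = 200;    // per-retry wait cited in the earlier comment
    long requestTimeoutMs = 3000;  // raft.client.rpc.request.timeout default
    long maxClientRetryMs = numRetries * (retryIntervalMs + requestTimeoutMs);
    // 50 * (200 + 3000) = 160,000 ms = 160 s, which exceeds the 60 s
    // retry cache timeout and so can trigger a resubmission at the server.
    System.out.println(maxClientRetryMs + " ms"); // prints "160000 ms"
  }
}
{code}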

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> steps taken :
> --
>  # Ran Put Key command to write 50GB data. Put Key client operation failed 
> after 17 mins.
> error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
> 54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk 
> request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
>  
> {code}
>  
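
For context, the rejection in the log above follows this pattern; an 
illustrative sketch based only on the messages shown (chunkFile, 
overwriteRequested, chunkInfo and the exception type are assumptions, not the 
actual ChunkUtils code):

{code:java}
// A WriteChunk that targets an already-written chunk file is refused unless
// the request carries an explicit overwrite flag, which is why a resubmitted
// (retried) write fails with OVERWRITE_FLAG_REQUIRED.
if (chunkFile.exists() && !overwriteRequested) {
  LOG.error("Rejecting write chunk request. Chunk overwrite without explicit"
      + " request. {}", chunkInfo);
  throw new StorageContainerException("Rejecting write chunk request. "
      + "OverWrite flag required." + chunkInfo, OVERWRITE_FLAG_REQUIRED);
}
{code}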



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-15 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Status: Patch Available  (was: Open)

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-15 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Attachment: HDDS-464.001.patch

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-464.001.patch
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2018-09-15 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615622#comment-16615622
 ] 

Lokesh Jain commented on HDDS-451:
--

[~szetszwo] The retry policy set in Ozone would make the client retry for 50 * 
200ms = 10 secs. This is less than the retry cache timeout of 60 secs. 
Therefore the case should not arise where a client retries after its retry 
cache entry has become invalid, which in turn would cause the request to be 
resubmitted at the server.

I had another case in mind. When a new leader is elected, it places a 
placeholder entry into the log. There might be a race condition where the 
placeholder index returned is not the last entry. This can happen if an 
appendEntry is executing in parallel. The appendEntry might have passed the 
validation stage, where the role and leaderId are checked; after that it can 
apply entries to the raft log, so the new entries might land after the 
placeholder index. It's a rare scenario, but is this something that could 
cause the above error?

> PutKey failed due to error "Rejecting write chunk request. Chunk overwrite 
> without explicit request"
> 
>
> Key: HDDS-451
> URL: https://issues.apache.org/jira/browse/HDDS-451
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: all-node-ozone-logs-1536841590.tar.gz
>
>
> steps taken :
> --
>  # Ran Put Key command to write 50GB data. Put Key client operation failed 
> after 17 mins.
> error seen in ozone.log:
> 
>  
> {code}
> 2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
>  tmp chunk file
> 2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
> writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:WRITE_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
> (ChunkManagerImpl.java:85) - writing 
> chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  chunk stage:COMMIT_DATA chunk 
> file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
>  tmp chunk file
> 2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 206
> 2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:148) - Executing cycle Number : 207
> 2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
> Rejecting write chunk request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
> Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
> Message: Rejecting write chunk request. OverWrite flag 
> required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
> (ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite 
> without explicit request. 
> ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
>  offset=0, len=16777216}
> 2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
> (ContainerUtils.java:149) - Operation: 

[jira] [Assigned] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-464:


Assignee: Lokesh Jain

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-464:
-
Fix Version/s: 0.2.1

> Fix TestCloseContainerHandlingByClient
> --
>
> Key: HDDS-464
> URL: https://issues.apache.org/jira/browse/HDDS-464
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
> AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-464) Fix TestCloseContainerHandlingByClient

2018-09-14 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-464:


 Summary: Fix TestCloseContainerHandlingByClient
 Key: HDDS-464
 URL: https://issues.apache.org/jira/browse/HDDS-464
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain


testBlockWriteViaRatis and testMultiBlockWrites2 fail with NPE and 
AssertionError respectively.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-14 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615141#comment-16615141
 ] 

Lokesh Jain commented on HDDS-419:
--

[~jnp] Uploaded v3 patch, which fixes the findbugs warnings. The test failure is not related.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch, 
> HDDS-419.003.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-14 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-419:
-
Attachment: HDDS-419.003.patch

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch, 
> HDDS-419.003.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-14 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614344#comment-16614344
 ] 

Lokesh Jain edited comment on HDDS-419 at 9/14/18 4:47 PM:
---

[~xyao] Thanks for reviewing the patch! v2 patch addresses your comments. The 
fix in StringUtils has been done by Dinesh in HDDS-456.


was (Author: ljain):
[~xyao] Thanks for reviewing the patch! v3 patch addresses your comments. The 
fix in StringUtils has been done by Dinesh in HDDS-456.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-14 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614505#comment-16614505
 ] 

Lokesh Jain commented on HDDS-419:
--

[~dineshchitlangia] Thanks for clarifying! I have created HADOOP-15755 to track 
this.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16614344#comment-16614344
 ] 

Lokesh Jain commented on HDDS-419:
--

[~xyao] Thanks for reviewing the patch! v3 patch addresses your comments. The 
fix in StringUtils has been done by Dinesh in HDDS-456.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613798#comment-16613798
 ] 

Lokesh Jain commented on HDDS-419:
--

Uploaded v2 patch, which throws an exception if there is an inconsistency in 
the chunk entries.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-419:
-
Attachment: HDDS-419.002.patch

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch, HDDS-419.002.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-12 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612430#comment-16612430
 ] 

Lokesh Jain edited comment on HDDS-419 at 9/12/18 4:35 PM:
---

[~msingh] Thanks for working on this! Please find my comments below.
 # ChunkGroupInputStream: 118-123 - I agree with [~xyao] and [~ajayydv]. 
Ideally we should not have a case where actualLen is not equal to readLen. But 
if we do encounter such a scenario, we should log it and also throw an 
exception. This can only occur if one of the chunk entries gives an incorrect 
length or if a chunk file is truncated.
 # We can also rename readLen and actualLen to numBytesToRead and numBytesRead 
respectively, or some better names.


was (Author: ljain):
[~msingh] Thanks for working on this! Please find my comments below.
 # ChunkGroupInputStream: 118-123 - I agree with [~xyao] and [~ajayydv]. 
Ideally we should not have a case where the number of bytes read is not equal 
to the number of bytes to read. But if we do encounter such a scenario, we 
should log it and also throw an exception. This can only occur if one of the 
chunk entries gives an incorrect length or if a chunk file is truncated.
 # We can also rename readLen and actualLen to numBytesToRead and numBytesRead 
respectively, or some better names.

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-12 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612430#comment-16612430
 ] 

Lokesh Jain commented on HDDS-419:
--

[~msingh] Thanks for working on this! Please find my comments below.
 # ChunkGroupInputStream: 118-123 - I agree with [~xyao] and [~ajayydv]. 
Ideally we should not have a case where the number of bytes read is not equal 
to the number of bytes to read. But if we do encounter such a scenario, we 
should log it and also throw an exception. This can only occur if one of the 
chunk entries gives an incorrect length or if a chunk file is truncated.
 # We can also rename readLen and actualLen to numBytesToRead and numBytesRead 
respectively, or some better names.
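
A minimal sketch of the check suggested in comment 1, using the names proposed 
in comment 2 (illustrative only, not the actual HDDS-419 patch; LOG is assumed 
to be an slf4j logger):

{code:java}
// Reachable only if a chunk entry reports a wrong length or the chunk file
// on disk is truncated, so log it and fail loudly rather than return short.
if (numBytesRead != numBytesToRead) {
  LOG.error("Inconsistent read: expected {} bytes but read {}",
      numBytesToRead, numBytesRead);
  throw new IOException("Inconsistent chunk read: expected " + numBytesToRead
      + " bytes but read " + numBytesRead);
}
{code}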

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611605#comment-16611605
 ] 

Lokesh Jain commented on HDDS-433:
--

[~hanishakoneru] This case would never arise. The readStateMachineData api is 
called only when stateMachineDataAttached is true.

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.
> The following exception was seen using Ozone, where the leader sent incorrect 
> append entries to the follower.
> {code}
> 2018-08-20 07:54:06,200 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-RaftLogWorker index 
> to:20312
> 2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.FollowerState: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
> lastRpcTime:1182, electionTimeout:990ms
> 2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
> for changeToCandidate
> 2018-08-20 07:54:07,801 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 14 
> for changeToFollower
> 2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.FollowerState: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
> lastRpcTime:2167, electionTimeout:976ms
> 2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
> for changeToCandidate
> 2018-08-20 07:54:21,715 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from 
> 2bf278ca-2dad-4029-a387-2faeb10adef5_9858 to null at term 14 for ini
> tElection
> 2018-08-20 07:54:29,151 INFO org.apache.ratis.server.impl.LeaderElection: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: begin an election in Term 15
> 2018-08-20 07:54:30,735 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 15 
> for changeToFollower
> 2018-08-20 07:54:30,740 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from null to 
> b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 at term 15 for app
> endEntries
>  
> 2018-08-20 07:54:30,741 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-org.apache.ratis.server.impl.RoleInfo@6b1e0fb8:
>  Withhold vote from candidate b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 with 
> term 15. State: leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, term=15, 
> lastRpcElapsed=0ms
>  
> 2018-08-20 07:54:30,745 INFO org.apache.ratis.server.impl.LeaderElection: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Election REJECTED; received 1 
> response(s) [2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858<-2
> bf278ca-2dad-4029-a387-2faeb10adef5_9858#0:FAIL-t15] and 0 exception(s); 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858:t15, 
> leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, 
> voted=2e240240-0fac-4f93-8aa8-fa8f
> 74bf1810_9858, raftlog=[(t:14, i:20374)], 
> conf=[b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858:172.26.32.231:9858, 
> 2bf278ca-2dad-4029-a387-2faeb10adef5_9858:172.26.32.230:9858, 
> 2e240240-0fac-4f93-8aa8-fa8f74bf
> 1810_9858:172.26.32.228:9858], old=null
> 2018-08-20 07:54:31,227 WARN 
> org.apache.ratis.grpc.server.RaftServerProtocolService: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Failed appendEntries 
> b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858->2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858#1
> java.lang.IllegalStateException: Unexpected Index: previous is (t:14, 
> i:20374) but entries[0].getIndex()=0
> at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:60)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.validateEntries(RaftServerImpl.java:786)
> at 
> 

[jira] [Commented] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611077#comment-16611077
 ] 

Lokesh Jain commented on HDDS-433:
--

[~hanishakoneru] Thanks for reviewing the patch!
{code:java}
SMLogEntryProto.newBuilder(smLogEntryProto)
{code}
makes sure that all the fields of smLogEntryProto are copied into the new 
object. Therefore we do not need to set them explicitly.
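
For illustration, the builder-copy semantics look like this; a sketch in which 
'entry' and 'data' are hypothetical placeholders and the accessor names are 
assumptions based on the proto messages quoted in this thread, not code from 
the patch:

{code:java}
// Protobuf's newBuilder(prototype) starts from a copy of every populated
// field, so only the fields that change need explicit setters.
LogEntryProto rebuilt = LogEntryProto.newBuilder(entry)  // keeps term, index, ...
    .setSmLogEntry(SMLogEntryProto.newBuilder(entry.getSmLogEntry())
        .setStateMachineData(data))                      // replace only this field
    .build();
{code}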

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.
> The following exception was seen using Ozone, where the leader sent incorrect 
> append entries to the follower.
> {code}
> 2018-08-20 07:54:06,200 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-RaftLogWorker index 
> to:20312
> 2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.FollowerState: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
> lastRpcTime:1182, electionTimeout:990ms
> 2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
> for changeToCandidate
> 2018-08-20 07:54:07,801 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 14 
> for changeToFollower
> 2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.FollowerState: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
> lastRpcTime:2167, electionTimeout:976ms
> 2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
> for changeToCandidate
> 2018-08-20 07:54:21,715 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from 
> 2bf278ca-2dad-4029-a387-2faeb10adef5_9858 to null at term 14 for ini
> tElection
> 2018-08-20 07:54:29,151 INFO org.apache.ratis.server.impl.LeaderElection: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: begin an election in Term 15
> 2018-08-20 07:54:30,735 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
> org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 15 
> for changeToFollower
> 2018-08-20 07:54:30,740 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from null to 
> b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 at term 15 for app
> endEntries
>  
> 2018-08-20 07:54:30,741 INFO org.apache.ratis.server.impl.RaftServerImpl: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-org.apache.ratis.server.impl.RoleInfo@6b1e0fb8:
>  Withhold vote from candidate b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 with 
> term 15. State: leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, term=15, 
> lastRpcElapsed=0ms
>  
> 2018-08-20 07:54:30,745 INFO org.apache.ratis.server.impl.LeaderElection: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Election REJECTED; received 1 
> response(s) [2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858<-2
> bf278ca-2dad-4029-a387-2faeb10adef5_9858#0:FAIL-t15] and 0 exception(s); 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858:t15, 
> leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, 
> voted=2e240240-0fac-4f93-8aa8-fa8f
> 74bf1810_9858, raftlog=[(t:14, i:20374)], 
> conf=[b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858:172.26.32.231:9858, 
> 2bf278ca-2dad-4029-a387-2faeb10adef5_9858:172.26.32.230:9858, 
> 2e240240-0fac-4f93-8aa8-fa8f74bf
> 1810_9858:172.26.32.228:9858], old=null
> 2018-08-20 07:54:31,227 WARN 
> org.apache.ratis.grpc.server.RaftServerProtocolService: 
> 2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Failed appendEntries 
> b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858->2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858#1
> java.lang.IllegalStateException: Unexpected Index: previous is (t:14, 
> i:20374) but entries[0].getIndex()=0
> at 
> org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:60)
> at 
> 

[jira] [Updated] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-433:
-
Description: 
ContainerStateMachine#readStateMachineData returns LogEntryProto with index set 
to 0. This leads to an exception in Ratis. The LogEntryProto to return should be 
built over the input LogEntryProto.

The following exception was seen using Ozone, where the leader sent incorrect 
append entries to the follower.

{code}
2018-08-20 07:54:06,200 INFO org.apache.ratis.server.storage.RaftLogWorker: 
Rolling segment:2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-RaftLogWorker index 
to:20312
2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.FollowerState: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
lastRpcTime:1182, electionTimeout:990ms
2018-08-20 07:54:07,800 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
for changeToCandidate
2018-08-20 07:54:07,801 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 14 
for changeToFollower
2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.FollowerState: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes to CANDIDATE, 
lastRpcTime:2167, electionTimeout:976ms
2018-08-20 07:54:21,712 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to CANDIDATE at term 14
for changeToCandidate
2018-08-20 07:54:21,715 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from 
2bf278ca-2dad-4029-a387-2faeb10adef5_9858 to null at term 14 for ini
tElection
2018-08-20 07:54:29,151 INFO org.apache.ratis.server.impl.LeaderElection: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: begin an election in Term 15
2018-08-20 07:54:30,735 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858 changes role from 
org.apache.ratis.server.impl.RoleInfo@6b1e0fb8 to FOLLOWER at term 15 
for changeToFollower
2018-08-20 07:54:30,740 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: change Leader from null to 
b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 at term 15 for app
endEntries
 
2018-08-20 07:54:30,741 INFO org.apache.ratis.server.impl.RaftServerImpl: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858-org.apache.ratis.server.impl.RoleInfo@6b1e0fb8:
 Withhold vote from candidate b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858 with 
term 15. State: leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, term=15, 
lastRpcElapsed=0ms
 
2018-08-20 07:54:30,745 INFO org.apache.ratis.server.impl.LeaderElection: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Election REJECTED; received 1 
response(s) [2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858<-2
bf278ca-2dad-4029-a387-2faeb10adef5_9858#0:FAIL-t15] and 0 exception(s); 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858:t15, 
leader=b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858, 
voted=2e240240-0fac-4f93-8aa8-fa8f
74bf1810_9858, raftlog=[(t:14, i:20374)], 
conf=[b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858:172.26.32.231:9858, 
2bf278ca-2dad-4029-a387-2faeb10adef5_9858:172.26.32.230:9858, 
2e240240-0fac-4f93-8aa8-fa8f74bf
1810_9858:172.26.32.228:9858], old=null
2018-08-20 07:54:31,227 WARN 
org.apache.ratis.grpc.server.RaftServerProtocolService: 
2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858: Failed appendEntries 
b6aaaf2c-2cbf-498f-995c-09cb2bb97cf4_9858->2e240240-0fac-4f93-8aa8-fa8f74bf1810_9858#1
java.lang.IllegalStateException: Unexpected Index: previous is (t:14, i:20374) 
but entries[0].getIndex()=0
at org.apache.ratis.util.Preconditions.assertTrue(Preconditions.java:60)
at 
org.apache.ratis.server.impl.RaftServerImpl.validateEntries(RaftServerImpl.java:786)
at 
org.apache.ratis.server.impl.RaftServerImpl.appendEntriesAsync(RaftServerImpl.java:859)
at 
org.apache.ratis.server.impl.RaftServerImpl.appendEntriesAsync(RaftServerImpl.java:824)
at 
org.apache.ratis.server.impl.RaftServerProxy.appendEntriesAsync(RaftServerProxy.java:247)
at 
org.apache.ratis.grpc.server.RaftServerProtocolService$1.onNext(RaftServerProtocolService.java:76)
at 
org.apache.ratis.grpc.server.RaftServerProtocolService$1.onNext(RaftServerProtocolService.java:66)
at 
org.apache.ratis.shaded.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
at 
org.apache.ratis.shaded.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:252)
at 

[jira] [Updated] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-433:
-
Affects Version/s: (was: 0.2.1)

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-433:
-
Fix Version/s: 0.2.1

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-433:
-
Status: Patch Available  (was: Open)

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-433:
-
Attachment: HDDS-433.001.patch

> ContainerStateMachine#readStateMachineData should properly build LogEntryProto
> --
>
> Key: HDDS-433
> URL: https://issues.apache.org/jira/browse/HDDS-433
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-433.001.patch
>
>
> ContainerStateMachine#readStateMachineData returns LogEntryProto with index 
> set to 0. This leads to an exception in Ratis. The LogEntryProto to return 
> should be built over the input LogEntryProto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-433) ContainerStateMachine#readStateMachineData should properly build LogEntryProto

2018-09-11 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-433:


 Summary: ContainerStateMachine#readStateMachineData should 
properly build LogEntryProto
 Key: HDDS-433
 URL: https://issues.apache.org/jira/browse/HDDS-433
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.2.1
Reporter: Lokesh Jain
Assignee: Lokesh Jain


ContainerStateMachine#readStateMachineData returns LogEntryProto with index set 
to 0. This leads to an exception in Ratis. The LogEntryProto to return should be 
built over the input LogEntryProto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-422) ContainerStateMachine.readStateMachineData throws OverlappingFileLockException

2018-09-10 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-422:


 Summary: ContainerStateMachine.readStateMachineData throws 
OverlappingFileLockException
 Key: HDDS-422
 URL: https://issues.apache.org/jira/browse/HDDS-422
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


 
{code:java}
2018-09-06 23:11:41,386 ERROR org.apache.ratis.server.impl.LogAppender: 
GRpcLogAppender(d95c60fd-0e23-4237-8135-e05a326b952d_9858 -> 
954e7a3b-b20e-43a5-8f82-4381872aa7bb_9858) hit IOException while loadin
g raft log
org.apache.ratis.server.storage.RaftLogIOException: 
d95c60fd-0e23-4237-8135-e05a326b952d_9858: Failed readStateMachineData for 
(t:39, i:667)SMLOGENTRY, client-CD988394E416, cid=90
at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:360)
at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:165)
at org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:214)
at 
org.apache.ratis.grpc.server.GRpcLogAppender.appendLog(GRpcLogAppender.java:148)
at 
org.apache.ratis.grpc.server.GRpcLogAppender.runAppenderImpl(GRpcLogAppender.java:92)
at org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:101)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.OverlappingFileLockException
at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)
at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)
at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)
at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)
at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)
at 
org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:176)
at 
org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:161)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:598)
at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:201)
at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:217)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:289)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$3(ContainerStateMachine.java:359)
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
{code}
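
For context, this exception is easy to reproduce outside Ozone: Java file locks 
are held on behalf of the whole JVM, so a second lock request over an 
already-locked region of the same file fails immediately, exactly as in the 
ChunkUtils.readData frames above. A minimal standalone sketch (the path is a 
placeholder and the file is assumed to exist):

{code:java}
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class OverlapDemo {
  public static void main(String[] args) throws Exception {
    AsynchronousFileChannel ch = AsynchronousFileChannel.open(
        Paths.get("/tmp/chunkfile"), StandardOpenOption.READ);
    Future<FileLock> first = ch.lock(0, Long.MAX_VALUE, true); // shared lock
    first.get();                                               // lock is now held
    // A second overlapping request from the same JVM fails synchronously:
    ch.lock(0, Long.MAX_VALUE, true); // throws OverlappingFileLockException
  }
}
{code}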



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-318) ratis INFO logs should not shown during ozoneFs command-line execution

2018-09-08 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608322#comment-16608322
 ] 

Lokesh Jain commented on HDDS-318:
--

[~szetszwo] Thanks for working on this! The patch looks very good to me. I have 
verified that the log messages from ConfUtils no longer appear. Can we make the 
block static?

 
{code:java}
bin/ozone oz -putKey /vol1/bb1/key1 -file /Users/ljain/Downloads/a.txt
2018-09-08 17:04:44,809 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-09-08 17:04:45,368 INFO util.LogUtils: Set org.apache.ratis.conf.ConfUtils 
log level to WARN
2018-09-08 17:04:48,036 INFO util.LogUtils: Set org.apache.ratis.conf.ConfUtils 
log level to WARN
2018-09-08 17:04:50,375 INFO util.LogUtils: Set org.apache.ratis.conf.ConfUtils 
log level to WARN
2018-09-08 17:04:53,099 INFO util.LogUtils: Set org.apache.ratis.conf.ConfUtils 
log level to WARN
{code}
Then the log line from LogUtils would appear only once.
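
A sketch of what the static block could look like; plain log4j 1.x is used 
here as a stand-in for whatever the patch's LogUtils helper does, so this is 
an illustration of the suggestion, not the actual change:

{code:java}
// Run the log-level override once per class load instead of on every client
// call, so the override (and its accompanying log line) happens at most once
// per JVM.
static {
  org.apache.log4j.Logger.getLogger("org.apache.ratis.conf.ConfUtils")
      .setLevel(org.apache.log4j.Level.WARN);
}
{code}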

 

> ratis INFO logs should not shown during ozoneFs command-line execution
> --
>
> Key: HDDS-318
> URL: https://issues.apache.org/jira/browse/HDDS-318
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-318.20180907.patch
>
>
> ratis INFOs should not be shown during ozoneFS CLI execution.
> Please find the snippet from one of the executions:
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone fs -put /etc/passwd /p2
> 2018-08-02 12:17:18 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:20 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 12:17:20 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> ..
> ..
> ..
>  
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-416) Fix bug in ChunkInputStreamEntry

2018-09-08 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16608048#comment-16608048
 ] 

Lokesh Jain commented on HDDS-416:
--

[~nandakumar131] currentPosition is not updated in the seek call. It is used in 
the getRemaining() call, which is in turn used in a read call by 
ChunkGroupInputStream.
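
To illustrate the dependency described above (a sketch; length and the method 
shape are illustrative, not the HDDS-416 patch):

{code:java}
// If seek() moves the underlying stream without updating currentPosition,
// getRemaining() reports a stale count and ChunkGroupInputStream's read loop
// asks for the wrong number of bytes. Deriving the value from getPos()
// removes the duplicated state entirely.
long getRemaining() throws IOException {
  return length - getPos();  // always consistent with the stream position
}
{code}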

> Fix bug in ChunkInputStreamEntry
> 
>
> Key: HDDS-416
> URL: https://issues.apache.org/jira/browse/HDDS-416
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-416.001.patch
>
>
> ChunkInputStreamEntry maintains a currentPosition field. This field is 
> redundant and can be replaced by getPos().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-416) Fix bug in ChunkInputStreamEntry

2018-09-07 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-416:
-
Attachment: HDDS-416.001.patch

> Fix bug in ChunkInputStreamEntry
> 
>
> Key: HDDS-416
> URL: https://issues.apache.org/jira/browse/HDDS-416
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-416.001.patch
>
>
> ChunkInputStreamEntry maintains a currentPosition field. This field is 
> redundant and can be replaced by getPos().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-416) Fix bug in ChunkInputStreamEntry

2018-09-07 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-416:
-
Status: Patch Available  (was: Open)

> Fix bug in ChunkInputStreamEntry
> 
>
> Key: HDDS-416
> URL: https://issues.apache.org/jira/browse/HDDS-416
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-416.001.patch
>
>
> ChunkInputStreamEntry maintains a currentPosition field. This field is 
> redundant and can be replaced by getPos().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-416) Fix bug in ChunkInputStreamEntry

2018-09-07 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-416:


 Summary: Fix bug in ChunkInputStreamEntry
 Key: HDDS-416
 URL: https://issues.apache.org/jira/browse/HDDS-416
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


ChunkInputStreamEntry maintains a currentPosition field. This field is redundant 
and can be replaced by getPos().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-09-07 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606859#comment-16606859
 ] 

Lokesh Jain edited comment on HDDS-361 at 9/7/18 8:59 AM:
--

Patch can be submitted after HDDS-325.

[~anu] Please take a look. I have uploaded the patch.


was (Author: ljain):
Patch can be submitted after HDDS-325.

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-361.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-09-07 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-361:
-
Attachment: HDDS-361.001.patch

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-361.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-09-07 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606859#comment-16606859
 ] 

Lokesh Jain commented on HDDS-361:
--

Patch can be submitted after HDDS-325.

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-361.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-07 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606756#comment-16606756
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v9 patch. I have enabled TestBlockDeletion in this patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, HDDS-325.009.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-07 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.009.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, HDDS-325.009.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16605286#comment-16605286
 ] 

Lokesh Jain commented on HDDS-325:
--

v8 patch fixes TestCommandStatusReportHandler.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.008.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13893) DiskBalancer: no validations for Disk balancer commands

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDFS-13893:
--

Assignee: Lokesh Jain

> DiskBalancer: no validations for Disk balancer commands 
> 
>
> Key: HDFS-13893
> URL: https://issues.apache.org/jira/browse/HDFS-13893
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Harshakiran Reddy
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: newbie
>
> {{Scenario:-}}
>  
>  1 Run the Disk Balancer commands with extra arguments passing  
> {noformat} 
> hadoopclient> hdfs diskbalancer -plan hostname --thresholdPercentage 2 
> *sgfsdgfs*
> 2018-08-31 14:57:35,454 INFO planner.GreedyPlanner: Starting plan for Node : 
> hostname:50077
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Disk Volume set 
> fb67f00c-e333-4f38-a3a6-846a30d4205a Type : DISK plan completed.
> 2018-08-31 14:57:35,457 INFO planner.GreedyPlanner: Compute Plan for Node : 
> hostname:50077 took 23 ms
> 2018-08-31 14:57:35,457 INFO command.Command: Writing plan to:
> 2018-08-31 14:57:35,457 INFO command.Command: 
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> Writing plan to:
> /system/diskbalancer/2018-Aug-31-14-57-35/hostname.plan.json
> {noformat} 
> Expected Output:- 
> =
> Disk balancer commands should fail if we pass any invalid arguments or 
> extra arguments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-397:
-
Attachment: HDDS-397.001.patch

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-397:
-
Status: Patch Available  (was: Open)

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.007.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-05 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604720#comment-16604720
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v7 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-05 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16604289#comment-16604289
 ] 

Lokesh Jain commented on HDDS-358:
--

[~anu] Thanks for updating the patch! v3 patch looks good to me. +1

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch, 
> HDDS-358.003.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-03 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16602256#comment-16602256
 ] 

Lokesh Jain commented on HDDS-358:
--

[~anu] Can you please rebase the patch? 

The patch looks good to me. Please find my comments below.
 # KeyDeletingService - We can move the logs into KeyDeletingTask and convert 
them to debug level instead. We should keep a log for the case when the block 
deletion result from SCM is a failure, and also log the number of keys being 
deleted by the service (see the sketch after this list).
 # We also need to start the KeyDeletingService and the block deletion tests. 
We can do it as part of a separate Jira though.
 # OmMetadataManagerImpl:50 - Star import
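
As a rough illustration of comment 1, a sketch of the suggested logging; the 
class shape, the SCM result flag and the counter are assumptions, not the 
actual KeyDeletingTask code:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: shows the log levels suggested above.
class KeyDeletingTaskSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KeyDeletingTaskSketch.class);

  void onScmResult(boolean success, int keyCount) {
    if (!success) {
      // A failed block deletion result from SCM stays visible at default levels.
      LOG.warn("Block deletion result from SCM was a failure for {} keys",
          keyCount);
      return;
    }
    // Routine progress is demoted to debug, per the review comment.
    LOG.debug("KeyDeletingService deleted {} keys in this iteration", keyCount);
  }
}{code}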

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-397) Handle deletion for keys with no blocks

2018-09-03 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-397:


 Summary: Handle deletion for keys with no blocks
 Key: HDDS-397
 URL: https://issues.apache.org/jira/browse/HDDS-397
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


Keys which do not contain blocks can be deleted directly from OzoneManager.
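
A minimal sketch of the intended short-circuit, with invented method names 
(the real OmKeyInfo/KeyManager apis may differ):
{code:java}
import java.util.List;

// Invented names; illustrates the intended check only.
class NoBlockKeyDeletionSketch {
  interface KeyInfo {
    List<?> getBlockList();
  }

  /** True if the key can be removed from OM metadata without contacting SCM. */
  static boolean deletableDirectly(KeyInfo key) {
    // A key with no blocks has no data on any datanode, so there is nothing
    // for SCM or the datanodes to delete; OM can drop the entry directly.
    return key.getBlockList().isEmpty();
  }
}{code}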



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-386) Create a datanode cli

2018-08-30 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-386:


 Summary: Create a datanode cli
 Key: HDDS-386
 URL: https://issues.apache.org/jira/browse/HDDS-386
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


For block deletion we need a debug cli on the datanode to know the state of the 
containers and the number of chunks present in each container.
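
A purely hypothetical sketch of what such a cli could look like, using picocli; 
the command name, option and the printed report are invented for illustration:
{code:java}
import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Invented names throughout; only the overall shape is meant.
@Command(name = "dn-debug",
    description = "Inspect container state on a datanode")
class DatanodeDebugCliSketch implements Callable<Integer> {

  @Option(names = "--container", required = true,
      description = "Container ID to inspect")
  private long containerId;

  @Override
  public Integer call() {
    // A real implementation would read container metadata and count chunks;
    // this sketch only echoes the request.
    System.out.printf("container %d: state and chunk count would be printed%n",
        containerId);
    return 0;
  }

  public static void main(String[] args) {
    System.exit(new CommandLine(new DatanodeDebugCliSketch()).execute(args));
  }
}{code}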



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-386) Create a datanode debug cli

2018-08-30 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-386:
-
Summary: Create a datanode debug cli  (was: Create a datanode cli)

> Create a datanode debug cli
> ---
>
> Key: HDDS-386
> URL: https://issues.apache.org/jira/browse/HDDS-386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> For block deletion we need a debug cli on the datanode to know the state of 
> the containers and the number of chunks present in each container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-29 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16596352#comment-16596352
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v6 patch, which addresses [~elek]'s comments. I have modified 
the test case in TestBlockDeletion to verify the event fired by the watcher.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, HDDS-325.006.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-29 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.006.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, HDDS-325.006.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-27 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593257#comment-16593257
 ] 

Lokesh Jain commented on HDDS-325:
--

{quote}I am not sure but I think in RetriableDatanodeEventWatcher.onTimeout we 
need to send the message to SCMEvents.DATANODE_COMMAND and not 
SCMEvents.RETRIABLE_DATANODE_COMMAND (A unit test would help to decide this 
question...)
{quote}
If we send DATANODE_COMMAND, the command is never retried on timeout; therefore 
I am firing RETRIABLE_DATANODE_COMMAND. This can, however, lead to an infinite 
number of retries, because we currently do not limit the retry count.
{quote}Let's say we have two kind of commands :
new CommandForDatanode<>(dnId, new DeleteBlocksCommand) 
new CommandForDatanode<>(dnId, new EatBananaCommand)

Both could be sent to the SCMEvents.RETRIABLE_DATANODE_COMMAND for 
RetriableDatanodeEventWatcher (and for the scmNodeManager) and they could 
handle both of them.
{quote}
The problem is that CMD_STATUS_REPORT is a collection of command statuses from 
the datanode, and each of these statuses prevents a timeout for a specific 
event. Therefore we either need to watch the events fired by 
CommandStatusReportHandler, or, if we watch CMD_STATUS_REPORT itself, we need 
to change the event watcher logic to watch an event that combines many 
replies. The problem I mentioned occurs if we watch the events fired by 
CommandStatusReportHandler.
{quote}we can create a builder (EventHandler.watchEvents is almost like a 
builder).
{quote}
I like this idea. We can very easily separate start events and end events using 
a builder. Further, we can provide a way to plug in our own timeout or 
completion logic for such events rather than using the default one; by logic I 
mean a custom function that handles these events in the event queue. This way 
we can easily handle CMD_STATUS_REPORT.

I will upload another patch addressing the other comments and will try adding a 
unit test. I agree we do not need the watchEvents function for now and can add 
it as part of another Jira when required.
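
To make the retry discussion concrete, a minimal sketch (not the actual 
RetriableDatanodeEventWatcher) of an onTimeout that re-fires the retriable 
event only up to a cap; the retry counter and MAX_RETRIES are assumptions added 
for illustration:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of bounded retries on timeout; payload type and cap are invented.
class RetriableWatcherSketch {
  private static final int MAX_RETRIES = 3;
  private final Map<Long, Integer> retriesById = new ConcurrentHashMap<>();

  /** Stands in for the event queue / EventPublisher. */
  interface Publisher {
    void fireRetriableDatanodeCommand(long commandId);
  }

  void onTimeout(Publisher publisher, long commandId) {
    int attempts = retriesById.merge(commandId, 1, Integer::sum);
    if (attempts <= MAX_RETRIES) {
      // Re-firing the retriable event (not plain DATANODE_COMMAND) keeps the
      // command under watch, while the cap prevents infinite retries.
      publisher.fireRetriableDatanodeCommand(commandId);
    }
    // Beyond the cap the command is dropped (it could also be logged or
    // escalated).
  }
}{code}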

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589207#comment-16589207
 ] 

Lokesh Jain commented on HDDS-325:
--

[~elek] I have uploaded rebased v5 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-22 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.005.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-21 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16588416#comment-16588416
 ] 

Lokesh Jain commented on HDDS-265:
--

[~GeLiXin] Thanks for working on this! +1 for v5 patch.

> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch, HDDS-265.003.patch, HDDS-265.004.patch, HDDS-265.005.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to accommodate this change. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-20 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Status: Patch Available  (was: Open)

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585674#comment-16585674
 ] 

Lokesh Jain commented on HDDS-265:
--

[~GeLiXin] Thanks for updating the patch! v3 patch looks good to me. I have a 
few very minor comments.
 # ContainerData:36,40 - unused imports
 # ContainerReport:23 - unused import
 # ContainerSet:24,45 - unused import

> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch, HDDS-265.003.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to accommodate this change. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-20 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585483#comment-16585483
 ] 

Lokesh Jain commented on HDDS-353:
--

The failed tests pass locally.

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch, HDDS-353.002.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585105#comment-16585105
 ] 

Lokesh Jain commented on HDDS-353:
--

v2 patch fixes the test failure. I have also changed 
HddsServerUtil#getScmHeartbeatInterval to return the time duration in 
milliseconds. This is required because some of the block deletion tests set 
the heartbeat interval in milliseconds. This change pulled in a few related 
changes as well.
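
For reference, a small sketch of reading the interval in milliseconds via 
Hadoop's Configuration#getTimeDuration; the config key and default below are 
placeholders, not the actual HddsServerUtil constants:
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

final class HeartbeatIntervalSketch {
  // Placeholder key and default; illustrates the unit change only.
  static long getScmHeartbeatIntervalMs(Configuration conf) {
    // getTimeDuration parses suffixed values such as "100ms", "3s" or "1m"
    // and converts them to the requested unit -- milliseconds here, so tests
    // that configure the interval in milliseconds are honored.
    return conf.getTimeDuration("example.scm.heartbeat.interval", 3000,
        TimeUnit.MILLISECONDS);
  }
}{code}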

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch, HDDS-353.002.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-353:
-
Attachment: HDDS-353.002.patch

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch, HDDS-353.002.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585038#comment-16585038
 ] 

Lokesh Jain edited comment on HDDS-325 at 8/19/18 6:42 AM:
---

[~elek] Thanks for reviewing the patch! I have uploaded v4 patch based on our 
discussion. v4 patch can be applied after applying the patch in HDDS-353. The 
changes related to events can be found in the RetriableDatanodeEventWatcher, 
SCMEvents and StorageContainerManager classes.

I had to change EventWatcher as well. The change adds the capability of 
watching multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events that need to be retried. If we created multiple event watchers 
for the same start event with different completion events, we would register 
multiple handlers (via multiple event watchers) for the same event type, and 
each of those handlers would retry the start event.


was (Author: ljain):
[~elek] Thanks for reviewing the patch! I have uploaded v4 patch based on our 
discussion. The changes related to events can be found in the 
RetriableDatanodeEventWatcher, SCMEvents and StorageContainerManager classes.

I had to change EventWatcher as well. The change adds the capability of 
watching multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events that need to be retried. If we created multiple event watchers 
for the same start event with different completion events, we would register 
multiple handlers (via multiple event watchers) for the same event type, and 
each of those handlers would retry the start event.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16585038#comment-16585038
 ] 

Lokesh Jain commented on HDDS-325:
--

[~elek] Thanks for reviewing the patch! I have uploaded v4 patch based on our 
discussion. The changes related to events can be found in the 
RetriableDatanodeEventWatcher, SCMEvents and StorageContainerManager classes.

I had to change EventWatcher as well. The change adds the capability of 
watching multiple events in a single watcher. This was needed because we 
currently have a single event type, RETRIABLE_DATANODE_COMMAND, for datanode 
command events that need to be retried. If we created multiple event watchers 
for the same start event with different completion events, we would register 
multiple handlers (via multiple event watchers) for the same event type, and 
each of those handlers would retry the start event.
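
A rough illustration of the "one watcher, multiple completion events" idea 
described above; the types are simplified stand-ins, not the real EventWatcher 
generics:
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// One watcher tracks a single start event but can be completed by any of
// several completion event types, so only one handler retries the start event.
class MultiCompletionWatcherSketch {
  private final Set<String> completionEventTypes;
  private final Map<Long, Long> pendingStartTimesById =
      new ConcurrentHashMap<>();

  MultiCompletionWatcherSketch(Set<String> completionEventTypes) {
    this.completionEventTypes = completionEventTypes;
  }

  void onStartEvent(long id) {
    pendingStartTimesById.put(id, System.currentTimeMillis());
  }

  void onCompletionEvent(String eventType, long id) {
    // Any registered completion type resolves the pending start event.
    if (completionEventTypes.contains(eventType)) {
      pendingStartTimesById.remove(id);
    }
  }
}{code}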

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.004.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Status: Open  (was: Patch Available)

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-18 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16584898#comment-16584898
 ] 

Lokesh Jain commented on HDDS-353:
--

KeyDeletingService is currently disabled, so I have disabled a few tests. I ran 
the tests locally after enabling the KeyDeletingService.

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-18 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-353:
-
Status: Patch Available  (was: Open)

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-18 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-353:
-
Attachment: HDDS-353.001.patch

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-353.001.patch
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-353) Multiple delete Blocks tests are failing consistently

2018-08-16 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain reassigned HDDS-353:


Assignee: Lokesh Jain

> Multiple delete Blocks tests are failing consistently
> 
>
> Key: HDDS-353
> URL: https://issues.apache.org/jira/browse/HDDS-353
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Shashikant Banerjee
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> As per the test reports here:
> [https://builds.apache.org/job/PreCommit-HDDS-Build/771/testReport/], 
> following tests are failing:
> 1. TestStorageContainerManager#testBlockDeletionTransactions
> 2. TestStorageContainerManager#testBlockDeletingThrottling
> 3. TestBlockDeletion#testBlockDeletion



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-08-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578324#comment-16578324
 ] 

Lokesh Jain commented on HDDS-265:
--

[~GeLiXin] Thanks for updating the patch! Please find my comments below.
 # We can use the default keyword to provide a default implementation.
{code:java}
default boolean isValidContainerType(ContainerProtos.ContainerType type) {
  return false;
}{code}
We can then remove the implementations provided in the subclasses.
 # RandomContainerDeletionChoosingPolicy:59,60 - The change should be reverted. 
We can use a KeyValueContainerData cast to print the number of pending 
deletion blocks.
 # There is a compilation failure after applying the patch; it occurs in the 
ContainerSet class. I think we can have a getContainerReport api in 
Container.java? (A possible shape is sketched below.)
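
On comment 3, a hedged sketch of the shape such an api could take; the report 
type and methods are assumptions, since the comment only raises the idea as a 
question:
{code:java}
// Hypothetical shapes only; ContainerReportSketch stands in for the real
// report type.
interface ContainerReportSketch {
  long getContainerId();
  long getDeleteTransactionId();
}

interface ContainerSketch {
  // Each Container implementation builds its own report, so ContainerSet no
  // longer needs to downcast to KeyValueContainerData when assembling reports.
  ContainerReportSketch getContainerReport();
}{code}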

> Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to 
> KeyValueContainerData
> -
>
> Key: HDDS-265
> URL: https://issues.apache.org/jira/browse/HDDS-265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Hanisha Koneru
>Assignee: LiXin Ge
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-265.000.patch, HDDS-265.001.patch, 
> HDDS-265.002.patch
>
>
> "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
> KeyValueContainers. As such they should be moved to KeyValueContainerData 
> from ContainerData.
> ContainerReport should also be refactored to accommodate this change. 
> Please refer to [~ljain]'s comment in HDDS-250.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-08-13 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.003.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-08-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578272#comment-16578272
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v3 patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch
>
>
> This Jira aims to add watcher for deleteBlocks command. It removes the 
> current rpc call required for datanode to send the acknowledgement for 
> deleteBlocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-308) SCM should identify a container with pending deletes using container reports

2018-08-12 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16577467#comment-16577467
 ] 

Lokesh Jain commented on HDDS-308:
--

Uploaded rebased v7 patch.

> SCM should identify a container with pending deletes using container reports
> 
>
> Key: HDDS-308
> URL: https://issues.apache.org/jira/browse/HDDS-308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-308.001.patch, HDDS-308.002.patch, 
> HDDS-308.003.patch, HDDS-308.004.patch, HDDS-308.005.patch, 
> HDDS-308.006.patch, HDDS-308.007.patch
>
>
> SCM should fire an event when it finds, using a container report, that a 
> container's deleteTransactionID does not match SCM's deleteTransactionId.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


