[jira] [Updated] (HDDS-4405) Proxy failover is logging with out trying all OMS

2020-10-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4405:
-
Reporter: umamaheswararao  (was: Bharat Viswanadham)

> Proxy failover is logging with out trying all OMS
> -
>
> Key: HDDS-4405
> URL: https://issues.apache.org/jira/browse/HDDS-4405
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: umamaheswararao
>Assignee: Bharat Viswanadham
>Priority: Major
>
> {code:java}
> [root@uma-1 ~]# sudo -u hdfs hdfs dfs -ls o3fs://bucket.volume.ozone1/
> 20/10/28 23:37:50 INFO retry.RetryInvocationHandler: 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3.
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:198)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:123)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:73)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:113)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882)
> , while invoking $Proxy10.submitRequest over 
> {om1=nodeId=om1,nodeAddress=uma-1.uma.root.hwx.site:9862, 
> om3=nodeId=om3,nodeAddress=uma-3.uma.root.hwx.site:9862, 
> om2=nodeId=om2,nodeAddress=uma-2.uma.root.hwx.site:9862} after 1 failover 
> attempts. Trying to failover immediately.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4405) Proxy failover is logging with out trying all OMS

2020-10-28 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4405:


 Summary: Proxy failover is logging with out trying all OMS
 Key: HDDS-4405
 URL: https://issues.apache.org/jira/browse/HDDS-4405
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


{code:java}
[root@uma-1 ~]# sudo -u hdfs hdfs dfs -ls o3fs://bucket.volume.ozone1/
20/10/28 23:37:50 INFO retry.RetryInvocationHandler: 
com.google.protobuf.ServiceException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3.
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:198)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:123)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:73)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:113)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882)
, while invoking $Proxy10.submitRequest over 
{om1=nodeId=om1,nodeAddress=uma-1.uma.root.hwx.site:9862, 
om3=nodeId=om3,nodeAddress=uma-3.uma.root.hwx.site:9862, 
om2=nodeId=om2,nodeAddress=uma-2.uma.root.hwx.site:9862} after 1 failover 
attempts. Trying to failover immediately.{code}
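The report above shows the proxy logging "after 1 failover attempts" even though only one OM had been contacted. A minimal sketch of the intended counting, assuming a hypothetical class (`FailoverProxySketch` and its methods are illustrative, not the actual Ozone client code): a "failover attempt" is counted only once the proxy list has wrapped around, i.e. after every OM has been tried.

```java
import java.util.List;

// Hypothetical sketch: count a failover attempt only after all OMs in the
// list have been tried once, so the "after N failover attempts" log line
// is accurate.
class FailoverProxySketch {
  private final List<String> omNodeIds;
  private int currentIndex = 0;
  private int failoverCount = 0;

  FailoverProxySketch(List<String> omNodeIds) {
    this.omNodeIds = omNodeIds;
  }

  /** Advance to the next OM; increment the failover count only when the
   *  index wraps past the end of the list (a full round of OMs tried). */
  String failoverToNext() {
    currentIndex = (currentIndex + 1) % omNodeIds.size();
    if (currentIndex == 0) {
      failoverCount++; // every OM has now been attempted once
    }
    return omNodeIds.get(currentIndex);
  }

  int getFailoverCount() {
    return failoverCount;
  }
}
```

With three OMs, the count stays at 0 until the third failover wraps back to the first node, which is the behavior the log message implies.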






[jira] [Updated] (HDDS-4117) Normalize Keypath for listKeys.

2020-10-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4117:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

>  Normalize Keypath for listKeys.
> 
>
> Key: HDDS-4117
> URL: https://issues.apache.org/jira/browse/HDDS-4117
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and 
> stores the normalized key name.
> When listKeys uses the given keyName (not the normalized key path) as the 
> prefix and startKey, listKeys returns an empty result.
> Similar to HDDS-4102, we should normalize startKey and keyPrefix.
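The normalization being asked for can be sketched as follows (a minimal illustration using `java.nio.file.Paths`; the class and method names are hypothetical, not the actual OM utility). The point is that listKeys must normalize `startKey` and `keyPrefix` the same way the key name was normalized on write, or prefix matching against the key table will not line up:

```java
import java.nio.file.Paths;

// Hypothetical sketch of key-path normalization for listKeys arguments.
class KeyNormalizeSketch {
  static String normalizeKey(String keyName) {
    // Collapse duplicate slashes and "." segments, then strip any leading
    // slash so the result matches the stored (normalized) key name form.
    String normalized = Paths.get(keyName).normalize().toString();
    return normalized.startsWith("/") ? normalized.substring(1) : normalized;
  }
}
```

Applying the same function to the stored key, `keyPrefix`, and `startKey` guarantees the three compare consistently.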






[jira] [Resolved] (HDDS-3995) s3g met NPE exception while write file by multiPartUpload

2020-10-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3995.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> s3g met NPE exception while write file by multiPartUpload
> -
>
> Key: HDDS-3995
> URL: https://issues.apache.org/jira/browse/HDDS-3995
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Affects Versions: 0.5.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available, s3g
> Fix For: 1.1.0
>
>
> The following is the exception:
>  
> 2020-07-20 17:26:17,692 
> [java.util.concurrent.ThreadPoolExecutor$Worker@2b70f372[State = -1, empty 
> queue]] ERROR org.apache.hadoop.hdds.scm.storage.BlockOutputStream: writing 
> chunk failed 104545451305649559_chunk_1 blockID conID: 284442 locID: 
> 104545451305649559 bcsId: 0 with exception 
> org.apache.ratis.protocol.AlreadyClosedException: 
> SlidingWindow$Client:client-96F1E0530C66->RAFT is closed.
> 2020-07-20 17:26:17,700 
> [java.util.concurrent.ThreadPoolExecutor$Worker@2b70f372[State = -1, empty 
> queue]] ERROR org.apache.hadoop.hdds.scm.storage.BlockOutputStream: writing 
> chunk failed 104545451305649559_chunk_2 blockID conID: 284442 locID: 
> 104545451305649559 bcsId: 0 with exception 
> org.apache.ratis.protocol.AlreadyClosedException: 
> SlidingWindow$Client:client-96F1E0530C66->RAFT is closed.
> 2020-07-20 17:26:17,700 
> [java.util.concurrent.ThreadPoolExecutor$Worker@2b70f372[State = -1, empty 
> queue]] ERROR org.apache.hadoop.hdds.scm.storage.BlockOutputStream: writing 
> chunk failed 104545451305649559_chunk_3 blockID conID: 284442 locID: 
> 104545451305649559 bcsId: 0 with exception 
> org.apache.ratis.protocol.AlreadyClosedException: 
> SlidingWindow$Client:client-96F1E0530C66->RAFT is closed.
> 2020-07-20 17:26:17,726 [qtp2131952342-276] WARN 
> org.apache.hadoop.ozone.client.io.KeyOutputStream: Encountered exception 
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.AlreadyClosedException: 
> SlidingWindow$Client:client-96F1E0530C66->RAFT is closed. on the pipeline 
> Pipeline[ Id: 58e4eef0-d2b1-4246-a152-83a28012848f, Nodes: 
> 38103930-07b9-4434-a657-621276c65683\{ip: 9.180.20.47, host: 
> host-9-180-20-47, networkLocation: /rack1, certSerialId: 
> null}207b98d9-ad64-45a8-940f-504b514feff5\{ip: 9.180.21.88, host: 
> host-9-180-21-88, networkLocation: /rack2, certSerialId: 
> null}d3336357-8920-4a4e-a12f-e57da1640c4d\{ip: 9.180.20.94, host: 
> host-9-180-20-94, networkLocation: /rack1, certSerialId: null}, Type:RATIS, 
> Factor:THREE, State:OPEN, leaderId:207b98d9-ad64-45a8-940f-504b514feff5, 
> CreationTimestamp2020-07-07T15:11:10.887Z]. The last committed block length 
> is 0, uncommitted data length is 10485760 retry count 0
> 2020-07-20 17:26:17,726 [qtp2131952342-276] INFO 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool: Allocating 
> block with ExcludeList \{datanodes = [], containerIds = [], pipelineIds = 
> [PipelineID=58e4eef0-d2b1-4246-a152-83a28012848f]}
> 2020-07-20 17:26:18,238 [qtp2131952342-276] WARN 
> org.eclipse.jetty.server.HttpChannel: /bucketmbl/114.dat
> javax.servlet.ServletException: javax.servlet.ServletException: 
> java.lang.NullPointerException
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:162)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>  at org.eclipse.jetty.server.Server.handle(Server.java:500)
>  at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
>  at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>  at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
>  at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
>  at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
>  at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
>  at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
>  at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388)
>  at 
> 

[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-14 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214285#comment-17214285
 ] 

Bharat Viswanadham commented on HDDS-4164:
--

This issue is fixed by HDDS-4262.

> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 1.1.0
>
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-14 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214284#comment-17214284
 ] 

Bharat Viswanadham commented on HDDS-4164:
--

Thank You, [~ljain] and [~weichiu] for the testing and confirmation.

> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 1.1.0
>
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Resolved] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4164.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 1.1.0
>
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDDS-4338) SCM web UI banner shows "HDFS SCM"

2020-10-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4338:
-
Labels: newbie  (was: )

> SCM web UI banner shows "HDFS SCM"
> --
>
> Key: HDDS-4338
> URL: https://issues.apache.org/jira/browse/HDDS-4338
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: Screen Shot 2020-10-12 at 6.42.31 PM.png
>
>
> !Screen Shot 2020-10-12 at 6.42.31 PM.png!  Let's call it Ozone SCM, shall we?






[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213347#comment-17213347
 ] 

Bharat Viswanadham commented on HDDS-4164:
--

The annotations on methods are just for representational purposes: when I dug 
into the RPC code, I found that the check for the annotation's presence 
happens, but the result is not used.

Also, in the OM client protocol we have only a single RPC method, 
*submitRequest*, so marking methods with @Idempotent/@AT_MOST_ONCE would be 
purely representational and would not help in any way. So, I planned to skip 
marking the methods with annotations.

 

In the NN, all non-idempotent methods use the retry cache; in Ozone, all write 
requests go through the Ratis retry cache, which means all write requests use 
the cache by default.
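The retry-cache idea referenced here can be sketched minimally (a hypothetical illustration, not the actual Ratis implementation): a retried write carrying the same (clientId, callId) pair receives the cached response instead of being applied a second time, which is why per-method @Idempotent marking adds nothing for Ozone writes.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a retry cache: deduplicate replayed write
// requests by (clientId, callId) so a retry never re-applies a mutation.
class RetryCacheSketch {
  private final Map<String, String> cache = new HashMap<>();

  String submit(String clientId, long callId, String request) {
    String key = clientId + ":" + callId;
    if (cache.containsKey(key)) {
      return cache.get(key); // replayed request: return without re-executing
    }
    String response = "applied:" + request; // stand-in for the state change
    cache.put(key, response);
    return response;
  }
}
```

A retried call with the same clientId and callId returns the identical cached response, making every write effectively at-most-once regardless of annotations.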


> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 

[jira] [Commented] (HDDS-4339) Ozone S3 gateway throws NPE with goofys

2020-10-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213335#comment-17213335
 ] 

Bharat Viswanadham commented on HDDS-4339:
--

This looks similar to HDDS-3843, where we fail with an NPE when the auth 
header is missing.

 

[https://github.com/apache/hadoop-ozone/pull/1110#issuecomment-648405855]

 

We need to match the error code with AWS's. Right now, it looks like we just 
print it.
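A minimal sketch of the fix being suggested (the class name, method, and error-code strings are illustrative assumptions, not the actual Ozone s3g code — though "MissingSecurityHeader" is modeled on the S3 error codes AWS documents): validate the Authorization header before parsing it, and map the failure to an S3-style error code instead of letting an NPE escape.

```java
// Hypothetical sketch: check the Authorization header up front and return
// an AWS-style error code rather than throwing a NullPointerException.
class AuthHeaderSketch {
  static String validate(String authHeader) {
    if (authHeader == null || authHeader.isEmpty()) {
      // What AWS reports for a missing required header.
      return "MissingSecurityHeader";
    }
    if (!authHeader.startsWith("AWS4-HMAC-SHA256")) {
      // Present but not a signature v4 header we can parse.
      return "InvalidRequest";
    }
    return "OK";
  }
}
```

With a guard like this in front of signature parsing, the goofys case above would surface a proper S3 error response instead of the WELD-wrapped NPE.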

> Ozone S3 gateway throws NPE with goofys
> ---
>
> Key: HDDS-4339
> URL: https://issues.apache.org/jira/browse/HDDS-4339
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Li Cheng
>Priority: Blocker
> Attachments: image-2020-10-13-15-23-49-864.png
>
>
> Configured goofys and s3g on different hosts; Fiotest writes files to the 
> goofys mount point. AWS secrets are exported on the s3g host. A bunch of 
> NPEs appear in the s3g logs.
>  # It looks like a missing AWS auth header can cause an NPE: 
> AWSSignatureProcessor.init() does not handle a missing header, which causes 
> the NPE.
>  # Why the AWS auth header is missing is also unknown.
> Note that some files have been successfully written into Ozone via goofys, 
> but not all of them succeeded.
>  
> 2020-10-13 11:18:43,425 [qtp1686100174-1238] ERROR 
> org.apache.hadoop.ozone.s3.OzoneClientProducer: Error: 
> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke public 
> void org.apache.hadoop.ozone.s3.AWSSignatureProcessor.init() throws 
> java.lang.Exception on 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor@5535155b
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:99)
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:80)
>  at 
> org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
>  at 
> org.glassfish.jersey.ext.cdi1x.internal.CdiComponentProvider$InjectionManagerInjectedCdiTarget.postConstruct(CdiComponentProvider.java:887)
>  at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:162)
>  at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.get(ContextualInstanceStrategy.java:177)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
>  at 
> org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:125)
>  at 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor$Proxy$_$$_WeldClientProxy.getAwsAccessId(Unknown
>  Source)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:79)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
>  at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
>  at 
> org.jboss.weld.injection.producer.ProducerMethodProducer.produce(ProducerMethodProducer.java:100)
>  at 
> org.jboss.weld.injection.producer.AbstractMemberProducer.produce(AbstractMemberProducer.java:161)
>  at 
> org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:180)
>  at 
> org.jboss.weld.context.unbound.DependentContextImpl.get(DependentContextImpl.java:70)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:785)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:885)
>  at 
> org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:92)
>  at org.jboss.weld.util.Beans.injectBoundFields(Beans.java:358)
>  at org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:369)
>  at 
> org.jboss.weld.injection.producer.ResourceInjector$1.proceed(ResourceInjector.java:70)
>  at 
> org.jboss.weld.injection.InjectionContextImpl.run(InjectionContextImpl.java:48)
>  at 
> 

[jira] [Updated] (HDDS-4327) Potential resource leakage using BatchOperation

2020-10-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4327:
-
Status: Patch Available  (was: In Progress)

> Potential resource leakage using BatchOperation
> ---
>
> Key: HDDS-4327
> URL: https://issues.apache.org/jira/browse/HDDS-4327
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> There are a number of places in the code where BatchOperation is used but not 
> closed. As a best practice, it is better to close them explicitly.
> I have stress test code that uses BatchOperation to insert into the OM RocksDB. 
> Without closing BatchOperation explicitly, the process crashes after just a 
> few minutes.
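
The fix being proposed is the standard try-with-resources idiom. The sketch below uses a stand-in `Batch` class rather than Ozone's real `BatchOperation`, but the shape is the same: any `AutoCloseable` batch is released on every code path, including exceptions.

```java
// Sketch of the try-with-resources fix; Batch is a stand-in for a
// BatchOperation-like resource that must always be closed.
public class BatchCloseExample {

    static class Batch implements AutoCloseable {
        boolean closed = false;
        void put(String key, String value) { /* buffer the write */ }
        @Override
        public void close() { closed = true; } // releases native/DB resources
    }

    static Batch lastBatch;

    static void writeBatch() {
        // try-with-resources guarantees close() runs on every path,
        // including exceptions thrown while building the batch.
        try (Batch batch = new Batch()) {
            lastBatch = batch;
            batch.put("volume/bucket/key1", "value1");
            batch.put("volume/bucket/key2", "value2");
            // the commit to the store would happen here in the real code
        }
    }

    public static void main(String[] args) {
        writeBatch();
        System.out.println("batch closed: " + lastBatch.closed);
    }
}
```

Relying on the garbage collector to release such resources is what leads to the crash under load described above; the explicit close bounds the native memory held by each batch.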



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4266) CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4266:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> CreateFile : store parent dir entries into DirTable and file entry into 
> separate FileTable
> --
>
> Key: HDDS-4266
> URL: https://issues.apache.org/jira/browse/HDDS-4266
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> This task is to handle the #createFile ofs client request. The idea is to 
> store all the missing parents in the {{keyname}} into the 'DirTable' and the 
> file into the 'FileTable'.






[jira] [Resolved] (HDDS-4122) Implement OM Delete Expired Open Key Request and Response

2020-10-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4122.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Implement OM Delete Expired Open Key Request and Response
> -
>
> Key: HDDS-4122
> URL: https://issues.apache.org/jira/browse/HDDS-4122
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Create an OM request and response that allows moving open keys from the open 
> key table to the deleted table in OM HA. The request portion of this 
> operation, which updates the open key table cache, will use a bucket lock.
>  






[jira] [Comment Edited] (HDDS-4339) Ozone S3 gateway throws NPE with goofys

2020-10-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213186#comment-17213186
 ] 

Bharat Viswanadham edited comment on HDDS-4339 at 10/13/20, 3:46 PM:
-

 
{quote}Export AWS secrets on the s3g host. See a bunch of NPE in s3g logs.
{quote}
{quote}Looks like missing AWS auth header could cause NPE. Looks like 
AWSSignatureProcessor.init() doesn't handle header missing which causes NPE.
{quote}
{quote}Why it's missing AWS auth header is also unknown.
{quote}
Shouldn't exporting the AWS credentials be done on the client node where we run 
goofys? Is this the reason the auth header is null in S3G?

 


was (Author: bharatviswa):
{quote}Export AWS secrets on the s3g host. See a bunch of NPE in s3g logs.
{quote} # 
{quote}Looks like missing AWS auth header could cause NPE. Looks like 
AWSSignatureProcessor.init() doesn't handle header missing which causes 
NPE.{quote}
 # 
{quote}Why it's missing AWS auth header is also unknown.{quote}

Exporting aws credentials should be done on the client node where we run goofys?

 

> Ozone S3 gateway throws NPE with goofys
> ---
>
> Key: HDDS-4339
> URL: https://issues.apache.org/jira/browse/HDDS-4339
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Li Cheng
>Priority: Blocker
> Attachments: image-2020-10-13-15-23-49-864.png
>
>
> Configured goofys and s3g on different hosts; Fiotest writes files on the 
> goofys mount point. Exported AWS secrets on the s3g host. Seeing a bunch of NPEs in 
> s3g logs.
>  # It looks like a missing AWS auth header could cause the NPE: 
> AWSSignatureProcessor.init() doesn't handle a missing header, which causes the NPE.
>  # Why the AWS auth header is missing is also unknown.
> Note that some files have been successfully written into Ozone via 
> goofys, though not all of them succeeded.
>  
> 2020-10-13 11:18:43,425 [qtp1686100174-1238] ERROR 
> org.apache.hadoop.ozone.s3.OzoneClientProducer: Error: 
> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke public 
> void org.apache.hadoop.ozone.s3.AWSSignatureProcessor.init() throws 
> java.lang.Exception on 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor@5535155b
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:99)
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:80)
>  at 
> org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
>  at 
> org.glassfish.jersey.ext.cdi1x.internal.CdiComponentProvider$InjectionManagerInjectedCdiTarget.postConstruct(CdiComponentProvider.java:887)
>  at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:162)
>  at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.get(ContextualInstanceStrategy.java:177)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
>  at 
> org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:125)
>  at 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor$Proxy$_$$_WeldClientProxy.getAwsAccessId(Unknown
>  Source)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:79)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
>  at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
>  at 
> org.jboss.weld.injection.producer.ProducerMethodProducer.produce(ProducerMethodProducer.java:100)
>  at 
> org.jboss.weld.injection.producer.AbstractMemberProducer.produce(AbstractMemberProducer.java:161)
>  at 
> org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:180)
>  at 
> org.jboss.weld.context.unbound.DependentContextImpl.get(DependentContextImpl.java:70)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> 

[jira] [Commented] (HDDS-4339) Ozone S3 gateway throws NPE with goofys

2020-10-13 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17213186#comment-17213186
 ] 

Bharat Viswanadham commented on HDDS-4339:
--

{quote}Export AWS secrets on the s3g host. See a bunch of NPE in s3g logs.
{quote}
{quote}Looks like missing AWS auth header could cause NPE. Looks like 
AWSSignatureProcessor.init() doesn't handle header missing which causes NPE.
{quote}
{quote}Why it's missing AWS auth header is also unknown.
{quote}

Shouldn't exporting the AWS credentials be done on the client node where we run goofys?

 

> Ozone S3 gateway throws NPE with goofys
> ---
>
> Key: HDDS-4339
> URL: https://issues.apache.org/jira/browse/HDDS-4339
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Li Cheng
>Priority: Blocker
> Attachments: image-2020-10-13-15-23-49-864.png
>
>
> Configured goofys and s3g on different hosts; Fiotest writes files on the 
> goofys mount point. Exported AWS secrets on the s3g host. Seeing a bunch of NPEs in 
> s3g logs.
>  # It looks like a missing AWS auth header could cause the NPE: 
> AWSSignatureProcessor.init() doesn't handle a missing header, which causes the NPE.
>  # Why the AWS auth header is missing is also unknown.
> Note that some files have been successfully written into Ozone via 
> goofys, though not all of them succeeded.
>  
> 2020-10-13 11:18:43,425 [qtp1686100174-1238] ERROR 
> org.apache.hadoop.ozone.s3.OzoneClientProducer: Error: 
> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke public 
> void org.apache.hadoop.ozone.s3.AWSSignatureProcessor.init() throws 
> java.lang.Exception on 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor@5535155b
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:99)
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:80)
>  at 
> org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
>  at 
> org.glassfish.jersey.ext.cdi1x.internal.CdiComponentProvider$InjectionManagerInjectedCdiTarget.postConstruct(CdiComponentProvider.java:887)
>  at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:162)
>  at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.get(ContextualInstanceStrategy.java:177)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
>  at 
> org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:125)
>  at 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor$Proxy$_$$_WeldClientProxy.getAwsAccessId(Unknown
>  Source)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:79)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
>  at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
>  at 
> org.jboss.weld.injection.producer.ProducerMethodProducer.produce(ProducerMethodProducer.java:100)
>  at 
> org.jboss.weld.injection.producer.AbstractMemberProducer.produce(AbstractMemberProducer.java:161)
>  at 
> org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:180)
>  at 
> org.jboss.weld.context.unbound.DependentContextImpl.get(DependentContextImpl.java:70)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:785)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:885)
>  at 
> org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:92)
>  at org.jboss.weld.util.Beans.injectBoundFields(Beans.java:358)
>  at org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:369)
>  at 
> org.jboss.weld.injection.producer.ResourceInjector$1.proceed(ResourceInjector.java:70)
>  at 
> 

[jira] [Commented] (HDDS-4327) Potential resource leakage using BatchOperation

2020-10-12 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212740#comment-17212740
 ] 

Bharat Viswanadham commented on HDDS-4327:
--

Thank You [~weichiu] for the catch.

I will post a PR to fix the issue.

> Potential resource leakage using BatchOperation
> ---
>
> Key: HDDS-4327
> URL: https://issues.apache.org/jira/browse/HDDS-4327
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> There are a number of places in the code where BatchOperation is used but not 
> closed. As a best practice, it is better to close them explicitly.
> I have stress test code that uses BatchOperation to insert into the OM RocksDB. 
> Without closing BatchOperation explicitly, the process crashes after just a 
> few minutes.






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-12 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17212736#comment-17212736
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

{quote}The performance impact of volume lock hasn’t tested before and it may 
also be within our tolerance(In-memory operations can be really fast). This 
should be the easiest way to fix this bug by far.
{quote}
Agreed, using the volume lock and then doing the calculation will solve the 
correctness issue.

If we don't find a smarter solution, we can fix this in the meantime by taking 
the volume lock and updating bytes used.
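
The hazard and the proposed fix can be modeled in a few lines; the `VolumeArgs` type here is a simplified stand-in, not the OM class. Buffering the live cached object lets a later transaction's update leak into an earlier flush, while taking a defensive copy under the volume lock snapshots the value at commit time:

```java
// Simplified model of the cached-object hazard: the double buffer must
// hold a snapshot of the volume usage, not the live cached object.
public class QuotaSnapshotExample {

    static class VolumeArgs {
        long bytesUsed;
        VolumeArgs(long bytesUsed) { this.bytesUsed = bytesUsed; }
        VolumeArgs copy() { return new VolumeArgs(bytesUsed); }
    }

    /** Buggy pattern: T1 buffers the shared cached object itself. */
    static long sharedFlushValue() {
        VolumeArgs cached = new VolumeArgs(10_000);
        VolumeArgs buffered = cached;   // same object goes to the double buffer
        cached.bytesUsed -= 1_000;      // T1's update
        cached.bytesUsed -= 2_000;      // T2's update, before T1 flushes
        return buffered.bytesUsed;      // T1's flush already contains T2's change
    }

    /** Fixed pattern: update and snapshot under the volume lock, buffer the copy. */
    static long copiedFlushValue() {
        Object volumeLock = new Object();
        VolumeArgs cached = new VolumeArgs(10_000);
        VolumeArgs buffered;
        synchronized (volumeLock) {
            cached.bytesUsed -= 1_000;  // T1's update
            buffered = cached.copy();   // immutable snapshot for the double buffer
        }
        synchronized (volumeLock) {
            cached.bytesUsed -= 2_000;  // T2 can no longer leak into T1's flush
        }
        return buffered.bytesUsed;
    }

    public static void main(String[] args) {
        System.out.println("shared flush sees: " + sharedFlushValue());
        System.out.println("copied flush sees: " + copiedFlushValue());
    }
}
```

With the shared object, T1's flush observes 7000 (both updates); with the copy, T1's flush observes 9000, so a replay of T2 after restart reconverges to the correct value.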

> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the 
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs bytesUsed -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, because the cached 
> object itself was put into the doubleBuffer, it flushes to DB with the updated 
> value from T2 (as it is the cached object) and updates DB with bytesUsed as 7000.
> Now the OM has restarted, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed; as it was not committed to the DB, the DB will again be 
> subtracted by 2000, and the DB will then have 5000.
> But after T2, the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double 
> buffer, this can cause this kind of issue.






[jira] [Created] (HDDS-4329) Expose Ratis retry config cache in OM

2020-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4329:


 Summary: Expose Ratis retry config cache in OM
 Key: HDDS-4329
 URL: https://issues.apache.org/jira/browse/HDDS-4329
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to expose the Ratis retry cache duration config in OM, and also to 
choose a sensible default value.






[jira] [Comment Edited] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211269#comment-17211269
 ] 

Bharat Viswanadham edited comment on HDDS-4308 at 10/9/20, 6:47 PM:


{quote}I think the better solution here is copy of a new volumeArgs object in 
the Request before addResponseToDoubleBuffer. Of course, during the copy 
process, we need to lock the object volumeArgs in case other operations change 
it.
{quote}
This might not be complete, I believe: if two threads each acquire a copy of the 
object and update it outside the lock, we have the issue again. I think the whole 
operation should be performed under the volume lock. (As we update in-memory, it 
should be quick.) But I agree that it might have a performance impact across 
buckets when key writes happen.

Question: with your tests, how much perf impact has been observed?

cc [~arp] for any more thoughts on this issue.


was (Author: bharatviswa):
I think the better solution here is copy of a new volumeArgs object in the 
Request before addResponseToDoubleBuffer. Of course, during the copy process, 
we need to lock the object volumeArgs in case other operations change it.

This might not be complete i believe, If 2 threads acquire copy object and if 
they update outside lock we have issue again. I think the whole operation 
should be performed under volume lock. (As we update in-memory it should be 
quick) But i agree that it might have performance impact across buckets when 
key writes happen.

Question: With your tests how much perf impact has been observed?

cc [~arp] For any more thoughts on this issue.




> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the 
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs bytesUsed -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, because the cached 
> object itself was put into the doubleBuffer, it flushes to DB with the updated 
> value from T2 (as it is the cached object) and updates DB with bytesUsed as 7000.
> Now the OM has restarted, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed; as it was not committed to the DB, the DB will again be 
> subtracted by 2000, and the DB will then have 5000.
> But after T2, the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double 
> buffer, this can cause this kind of issue.






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211269#comment-17211269
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

I think the better solution here is copy of a new volumeArgs object in the 
Request before addResponseToDoubleBuffer. Of course, during the copy process, 
we need to lock the object volumeArgs in case other operations change it.

This might not be complete, I believe: if two threads each acquire a copy of the 
object and update it outside the lock, we have the issue again. I think the whole 
operation should be performed under the volume lock. (As we update in-memory, it 
should be quick.) But I agree that it might have a performance impact across 
buckets when key writes happen.

Question: with your tests, how much perf impact has been observed?

cc [~arp] for any more thoughts on this issue.




> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the 
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs bytesUsed -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, because the cached 
> object itself was put into the doubleBuffer, it flushes to DB with the updated 
> value from T2 (as it is the cached object) and updates DB with bytesUsed as 7000.
> Now the OM has restarted, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed; as it was not committed to the DB, the DB will again be 
> subtracted by 2000, and the DB will then have 5000.
> But after T2, the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double 
> buffer, this can cause this kind of issue.






[jira] [Comment Edited] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211254#comment-17211254
 ] 

Bharat Viswanadham edited comment on HDDS-4308 at 10/9/20, 6:17 PM:


As mentioned in the scenario, this can happen when a double buffer flush has not 
completed and other transaction requests update bytes used for the same key.
When you run under load this can be seen; previously we observed a 
ConcurrentModificationException because of using the same cache object in the 
double buffer (HDDS-2322).

But here we might not see an error; instead, bytesUsed will be 
updated wrongly.





was (Author: bharatviswa):
As mentioned in the scenario this can happen when double buffer flush is not 
completed and other transaction requests updating bytes used for the same key.
When you run with a load this can be seen, as previously we have observed 
ConcurrentModificatinException because of using the same cache object in the 
double buffer. HDDS-2322

But here we might not see an error, but here this can happen bytesUsed will be 
updated wrongly.

Coming to the solution, I think we can use read lock and acquire volume Object 
using Table#get API and update bytes used and submit this object to double 
buffer. (In this way, we might not see volume lock contention, as we acquire 
write lock on volume during create/delete, so this might no affect performance.

Let me know your thoughts?



> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the 
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs bytesUsed -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, because the cached 
> object itself was put into the doubleBuffer, it flushes to DB with the updated 
> value from T2 (as it is the cached object) and updates DB with bytesUsed as 7000.
> Now the OM has restarted, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed; as it was not committed to the DB, the DB will again be 
> subtracted by 2000, and the DB will then have 5000.
> But after T2, the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double 
> buffer, this can cause this kind of issue.






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211254#comment-17211254
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

As mentioned in the scenario, this can happen when a double buffer flush has not 
completed and other transaction requests update bytes used for the same key.
When you run under load this can be seen; previously we observed a 
ConcurrentModificationException because of using the same cache object in the 
double buffer (HDDS-2322).

But here we might not see an error; instead, bytesUsed will be 
updated wrongly.

Coming to the solution, I think we can take a read lock, acquire the volume 
object using the Table#get API, update bytes used, and submit this object to the 
double buffer. (In this way, we might not see volume lock contention: we acquire 
the write lock on the volume during create/delete, so this might not affect 
performance.)

Let me know your thoughts.



> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs uses getCacheValue and puts the same object into the 
> doubleBuffer; this might cause an issue.
> Let's take the below scenario:
> InitialVolumeArgs bytesUsed -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts this updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, and has not yet updated the 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer and, when it commits, because the cached 
> object itself was put into the doubleBuffer, it flushes to DB with the updated 
> value from T2 (as it is the cached object) and updates DB with bytesUsed as 7000.
> Now the OM has restarted, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is replayed; as it was not committed to the DB, the DB will again be 
> subtracted by 2000, and the DB will then have 5000.
> But after T2, the value should be 7000, so the DB is in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into the double 
> buffer, this can cause this kind of issue.






[jira] [Commented] (HDDS-4164) OM client request fails with "failed to commit as key is not found in OpenKey table"

2020-10-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17211251#comment-17211251
 ] 

Bharat Viswanadham commented on HDDS-4164:
--

HDDS-4262 is the root cause of this issue: when the leader changes, the old leader 
replies to all pending requests with NOT_LEADER. Because we previously used a new 
clientID and callID for each retry, the Ratis server could not recognize a retried 
request. With the fix from HDDS-4262 applied I ran a freon test and no longer see 
KEY_NOT_FOUND.

[~ljain] Once you confirm, I will close this bug.

> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> 
>
> Key: HDDS-4164
> URL: https://issues.apache.org/jira/browse/HDDS-4164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Lokesh Jain
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> OM client request fails with "failed to commit as key is not found in OpenKey 
> table"
> {code:java}
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28868 $Proxy17.submitRequest over 
> nodeId=om3,nodeAddress=vc1330.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28870 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:53 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28869 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28871 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28872 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28866 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28867 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28874 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 WARN retry.RetryInvocationHandler: A failover has occurred 
> since the start of call #28875 $Proxy17.submitRequest over 
> nodeId=om1,nodeAddress=vc1325.halxg.cloudera.com:9862
> 20/08/28 03:21:54 ERROR freon.BaseFreonGenerator: Error on executing task 
> 14424
> KEY_NOT_FOUND org.apache.hadoop.ozone.om.exceptions.OMException: Failed to 
> commit key, as /vol1/bucket1/akjkdz4hoj/14424/104766512182520809entry is not 
> found in the OpenKey table
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
> at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.commitKey(OzoneManagerProtocolClientSideTranslatorPB.java:650)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntryPool.commitKey(BlockOutputStreamEntryPool.java:306)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:514)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.close(OzoneOutputStream.java:60)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.lambda$createKey$0(OzoneClientKeyGenerator.java:118)
> at com.codahale.metrics.Timer.time(Timer.java:101)
> at 
> org.apache.hadoop.ozone.freon.OzoneClientKeyGenerator.createKey(OzoneClientKeyGenerator.java:113)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:178)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:167)
> at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:150)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-10-09 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Use clientID and callID to uniquely identify the requests.
> This will help in case when the request is retried for write requests, when 
> the previous one is already processed, the previous result can be returned 
> from the cache.
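The deduplication described above can be pictured as a reply cache keyed by (clientID, callID). This is a hypothetical simplification, not the actual Ratis retry cache implementation:

```java
// Reply cache keyed by (clientID, callID): a retried request carries the
// same identifiers, so the cached result is returned instead of
// re-executing the write. Hypothetical simplification of the Ratis
// retry-cache behavior this change relies on.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

class RetryCacheSketch {
  private final Map<String, String> replies = new ConcurrentHashMap<>();

  String submit(String clientId, long callId, Supplier<String> execute) {
    String key = clientId + ":" + callId;
    // first arrival executes the request; a retry with the same
    // clientId/callId gets the previously computed reply
    return replies.computeIfAbsent(key, k -> execute.get());
  }
}
```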






[jira] [Commented] (HDDS-4327) Potential resource leakage using BatchOperation

2020-10-08 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17210393#comment-17210393
 ] 

Bharat Viswanadham commented on HDDS-4327:
--

[~weichiu]
Are there any places in the code where you have observed a batch that is not 
created as a try-with-resources? In OM we use it like this:
{code:java}
try (BatchOperation batchOperation = omMetadataManager.getStore()
    .initBatchOperation()) {
  // populate and commit the batch; close() is implicit
}
{code}
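As a minimal, self-contained illustration of why the try-with-resources pattern avoids the leak described in this issue (the Batch class here is a hypothetical stand-in, not Ozone's BatchOperation):

```java
// Why try-with-resources avoids the leak: close() runs even when the
// body throws. The Batch class is a hypothetical stand-in, not Ozone's
// BatchOperation.
class Batch implements AutoCloseable {
  static int openCount = 0;
  Batch() { openCount++; }
  @Override
  public void close() { openCount--; }
}

class BatchLeakDemo {
  static void useBatch(boolean fail) {
    try (Batch batch = new Batch()) {
      if (fail) {
        throw new IllegalStateException("simulated failure while batching");
      }
      // populate the batch and commit it here
    } catch (IllegalStateException e) {
      // the batch was still closed before control reached this handler
    }
  }
}
```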



> Potential resource leakage using BatchOperation
> ---
>
> Key: HDDS-4327
> URL: https://issues.apache.org/jira/browse/HDDS-4327
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> there are a number of places in the code where BatchOperation is used but not 
> closed. As a best practice, better to close them explicitly.
> I have a stress test code that uses BatchOperation to insert into OM rocksdb. 
> Without closing BatchOperation explicitly, the process crashes after just a 
> few minutes.






[jira] [Updated] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4308:
-
Description: 
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the updated value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

But after T2, the value should be 7000, so we have DB in an incorrect state.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 

  was:
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the updated value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

But after T2, the value should be 7000, so we have corrupted the DB.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 


> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs using getCacheValue and put the same object in 
> doubleBuffer, this might cause issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 1
> 1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
> volumeArgs to DoubleBuffer.
> 2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked by double Buffer and when it commits, and as it uses cached 
> Object put into doubleBuffer, it flushes to DB with the updated value from 
> T2(As it is a cache object) and update DB with bytesUsed as 7000.
> And now OM has restarted, and only DB has transactions till T1. (We get this 
> info from TransactionInfo 
> Table(https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is again replayed, as it is not committed to DB, now DB will be again 
> subtracted with 2000, and now DB will have 5000.
> But after T2, the value should be 7000, so we have DB in an incorrect state.
> Issue here:
> 1. As we use a cached object and put the same cached object into double 
> buffer this can cause this kind of issue. 






[jira] [Updated] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4308:
-
Description: 
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the updated value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

But after T2, the value should be 7000, so we have corrupted the DB.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 

  was:
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the updated value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 


> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs using getCacheValue and put the same object in 
> doubleBuffer, this might cause issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 1
> 1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
> volumeArgs to DoubleBuffer.
> 2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked by double Buffer and when it commits, and as it uses cached 
> Object put into doubleBuffer, it flushes to DB with the updated value from 
> T2(As it is a cache object) and update DB with bytesUsed as 7000.
> And now OM has restarted, and only DB has transactions till T1. (We get this 
> info from TransactionInfo 
> Table(https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is again replayed, as it is not committed to DB, now DB will be again 
> subtracted with 2000, and now DB will have 5000.
> But after T2, the value should be 7000, so we have corrupted the DB.
> Issue here:
> 1. As we use a cached object and put the same cached object into double 
> buffer this can cause this kind of issue. 






[jira] [Updated] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4308:
-
Description: 
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the updated value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 

  was:
Currently volumeArgs using getCacheValue and put the same object in 
doubleBuffer, this might cause issue.

Let's take the below scenario:

InitialVolumeArgs quotaBytes -> 1
1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
volumeArgs to DoubleBuffer.
2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
double buffer.

*Now at the end of flushing these transactions, our DB should have 7000 as 
bytes used.*

Now T1 is picked by double Buffer and when it commits, and as it uses cached 
Object put into doubleBuffer, it flushes to DB with the processed value from 
T2(As it is a cache object) and update DB with bytesUsed as 7000.

And now OM has restarted, and only DB has transactions till T1. (We get this 
info from TransactionInfo Table(https://issues.apache.org/jira/browse/HDDS-3685)

Now T2 is again replayed, as it is not committed to DB, now DB will be again 
subtracted with 2000, and now DB will have 5000.

Issue here:
1. As we use a cached object and put the same cached object into double buffer 
this can cause this kind of issue. 


> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs using getCacheValue and put the same object in 
> doubleBuffer, this might cause issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 1
> 1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
> volumeArgs to DoubleBuffer.
> 2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked by double Buffer and when it commits, and as it uses cached 
> Object put into doubleBuffer, it flushes to DB with the updated value from 
> T2(As it is a cache object) and update DB with bytesUsed as 7000.
> And now OM has restarted, and only DB has transactions till T1. (We get this 
> info from TransactionInfo 
> Table(https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is again replayed, as it is not committed to DB, now DB will be again 
> subtracted with 2000, and now DB will have 5000.
> Issue here:
> 1. As we use a cached object and put the same cached object into double 
> buffer this can cause this kind of issue. 






[jira] [Updated] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4308:
-
Priority: Blocker  (was: Major)

> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>
> Currently volumeArgs using getCacheValue and put the same object in 
> doubleBuffer, this might cause issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 1
> 1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
> volumeArgs to DoubleBuffer.
> 2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked by double Buffer and when it commits, and as it uses cached 
> Object put into doubleBuffer, it flushes to DB with the processed value from 
> T2(As it is a cache object) and update DB with bytesUsed as 7000.
> And now OM has restarted, and only DB has transactions till T1. (We get this 
> info from TransactionInfo 
> Table(https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is again replayed, as it is not committed to DB, now DB will be again 
> subtracted with 2000, and now DB will have 5000.
> Issue here:
> 1. As we use a cached object and put the same cached object into double 
> buffer this can cause this kind of issue. 






[jira] [Commented] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208208#comment-17208208
 ] 

Bharat Viswanadham commented on HDDS-4308:
--

cc [~micahzhao] for your comments on this issue.

> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Currently volumeArgs using getCacheValue and put the same object in 
> doubleBuffer, this might cause issue.
> Let's take the below scenario:
> InitialVolumeArgs quotaBytes -> 1
> 1. T1 -> Update VolumeArgs, and subtracting 1000 and put this updated 
> volumeArgs to DoubleBuffer.
> 2. T2-> Update VolumeArgs, and subtracting 2000 and has not still updated to 
> double buffer.
> *Now at the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked by double Buffer and when it commits, and as it uses cached 
> Object put into doubleBuffer, it flushes to DB with the processed value from 
> T2(As it is a cache object) and update DB with bytesUsed as 7000.
> And now OM has restarted, and only DB has transactions till T1. (We get this 
> info from TransactionInfo 
> Table(https://issues.apache.org/jira/browse/HDDS-3685)
> Now T2 is again replayed, as it is not committed to DB, now DB will be again 
> subtracted with 2000, and now DB will have 5000.
> Issue here:
> 1. As we use a cached object and put the same cached object into double 
> buffer this can cause this kind of issue. 






[jira] [Created] (HDDS-4308) Fix issue with quota update

2020-10-05 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4308:


 Summary: Fix issue with quota update
 Key: HDDS-4308
 URL: https://issues.apache.org/jira/browse/HDDS-4308
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Currently volumeArgs is fetched with getCacheValue and the same object is put into 
the doubleBuffer; this might cause an issue.

Let's take the below scenario:

Initial VolumeArgs quotaBytes -> 10000
1. T1 -> updates VolumeArgs, subtracting 1000, and puts the updated volumeArgs 
into the DoubleBuffer.
2. T2 -> updates VolumeArgs, subtracting 2000, and has not yet been flushed to the 
double buffer.

*At the end of flushing these transactions, our DB should have 7000 as bytes 
used.*

Now T1 is picked up by the double buffer, and when it commits, because the cached 
object was put into the doubleBuffer, it flushes to DB with the value already 
updated by T2 (as it is a cache object) and updates DB with bytesUsed as 7000.

Now the OM restarts, and the DB only has transactions up to T1. (We get this info 
from the TransactionInfo Table, https://issues.apache.org/jira/browse/HDDS-3685.)

T2 is then replayed, as it was not committed to DB, so 2000 is subtracted again 
and the DB ends up with 5000.

Issue here:
1. As we use a cached object and put the same cached object into the double 
buffer, this can cause this kind of issue.
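The failure mode described above comes down to two transactions sharing one mutable cached object. A minimal sketch of the bug (not Ozone code; names are illustrative):

```java
// Minimal reproduction of the failure mode: T1 and T2 share one mutable
// cached object, so the object T1 hands to the double buffer already
// carries T2's update by the time it is flushed. Not Ozone code.
class SharedCacheBugDemo {
  static class VolArgs {
    long bytesUsed;
    VolArgs(long bytesUsed) { this.bytesUsed = bytesUsed; }
  }

  static long flushedValueForT1() {
    VolArgs cached = new VolArgs(10000); // the cache entry both txns see

    cached.bytesUsed -= 1000;   // T1 applies its delta...
    VolArgs t1Flush = cached;   // ...and enqueues the SAME object

    cached.bytesUsed -= 2000;   // T2 mutates before T1 is flushed

    return t1Flush.bytesUsed;   // already includes T2's change
  }
}
```

After a crash between the two flushes, replaying T2 against the DB value that already includes T2 is what produces the double subtraction.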






[jira] [Assigned] (HDDS-3966) Intermittent crash in TestOMRatisSnapshots

2020-10-01 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-3966:


Assignee: Aryan Gupta

> Intermittent crash in TestOMRatisSnapshots
> --
>
> Key: HDDS-3966
> URL: https://issues.apache.org/jira/browse/HDDS-3966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Attila Doroszlai
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> TestOMRatisSnapshots was recently enabled and is crashing intermittently:
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1690/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1710/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/15/1713/it-hdds-om






[jira] [Resolved] (HDDS-3966) Intermittent crash in TestOMRatisSnapshots

2020-10-01 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3966.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Intermittent crash in TestOMRatisSnapshots
> --
>
> Key: HDDS-3966
> URL: https://issues.apache.org/jira/browse/HDDS-3966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> TestOMRatisSnapshots was recently enabled and is crashing intermittently:
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1690/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1710/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/15/1713/it-hdds-om






[jira] [Updated] (HDDS-2949) mkdir : store directory entries in a separate table

2020-10-01 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2949:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> mkdir : store directory entries in a separate table
> ---
>
> Key: HDDS-2949
> URL: https://issues.apache.org/jira/browse/HDDS-2949
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: backward-incompatible, pull-request-available
> Fix For: 1.1.0
>
>
> As of HDDS-2940, all the directories from the path prefix get created as 
> entries in the key table. as per the namespace proposal attached to 
> HDDS-2939, directory entries need to be stored in a separate "directory" 
> table. Files will continue to be stored in the key table, which can be 
> thought of as the "file" table.
> The advantage of a separate directory table is to make directory lookup more 
> efficient - the entire table would fit into memory for a typical file based 
> dataset. 






[jira] [Resolved] (HDDS-4251) Update Ratis version to latest snapshot

2020-09-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4251.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Update Ratis version to latest snapshot
> ---
>
> Key: HDDS-4251
> URL: https://issues.apache.org/jira/browse/HDDS-4251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> This Jira aims to update ozone with latest Ratis snapshot which has a 
> critical fix for OM HA - RATIS-1025.






[jira] [Resolved] (HDDS-4292) Ozone Client not working with Hadoop Version < 3.2

2020-09-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4292.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Ozone Client not working with Hadoop Version  < 3.2
> ---
>
> Key: HDDS-4292
> URL: https://issues.apache.org/jira/browse/HDDS-4292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> HDDS-3560 created new ProxyInfo object in case of IllegalAccessError 
> exception. But, it does not return the new instance and causes NPE in Hadoop 
> versions < 3.2
> {code:java}
> 20/09/29 23:10:22 ERROR client.OzoneClientFactory: Couldn't create RpcClient 
> protocol exception:20/09/29 23:10:22 ERROR client.OzoneClientFactory: 
> Couldn't create RpcClient protocol exception:java.lang.NullPointerException 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.isRpcInvocation(RetryInvocationHandler.java:435)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:354)
>  at com.sun.proxy.$Proxy10.submitRequest(Unknown Source) at 
> org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:89)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:213)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1030)
>  at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:175) 
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:242)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:113)
>  at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:149)
>  at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:51)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:94)
>  at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:161)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at 
> org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352) at 
> org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250) at 
> org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233) at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) 
> at org.apache.hadoop.fs.shell.Command.run(Command.java:177) at 
> org.apache.hadoop.fs.FsShell.run(FsShell.java:326) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at 
> org.apache.hadoop.fs.FsShell.main(FsShell.java:389)ls: Couldn't create 
> RpcClient protocol
> {code}
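The root cause can be sketched as follows. The class and method names below are simplified stand-ins for the proxy-provider code, not the actual Ozone sources: the fallback path constructs and caches a new ProxyInfo but never returns it, so the caller receives null and dereferences it later.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the failover proxy provider; not the real Ozone code.
public class ProxySketch {
    static class ProxyInfo {
        final String address;
        ProxyInfo(String address) { this.address = address; }
    }

    private static final Map<String, ProxyInfo> CACHE = new HashMap<>();

    // Buggy shape: the fallback instance is created and cached,
    // but the method still returns the stale null reference.
    static ProxyInfo createProxyIfNeededBuggy(String omNode) {
        ProxyInfo proxy = CACHE.get(omNode);
        if (proxy == null) {
            ProxyInfo created = new ProxyInfo(omNode);
            CACHE.put(omNode, created);
            // BUG: 'created' is never assigned to 'proxy' or returned.
        }
        return proxy; // null on the first call -> NPE in the caller
    }

    // Fixed shape: return the newly created instance.
    static ProxyInfo createProxyIfNeededFixed(String omNode) {
        ProxyInfo proxy = CACHE.get(omNode);
        if (proxy == null) {
            proxy = new ProxyInfo(omNode);
            CACHE.put(omNode, proxy);
        }
        return proxy;
    }

    public static void main(String[] args) {
        System.out.println(createProxyIfNeededBuggy("om1"));        // null
        System.out.println(createProxyIfNeededFixed("om2").address); // om2
    }
}
```

The NPE surfaces far from the bug (inside RetryInvocationHandler), because the null proxy is only dereferenced when the first RPC is submitted.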






[jira] [Updated] (HDDS-4292) Ozone Client not working with Hadoop Version < 3.2

2020-09-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4292:
-
Component/s: (was: Ozone Manager)
 Ozone Client

> Ozone Client not working with Hadoop Version  < 3.2
> ---
>
> Key: HDDS-4292
> URL: https://issues.apache.org/jira/browse/HDDS-4292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-3560 created a new ProxyInfo object when an IllegalAccessError is 
> thrown. But it does not return the new instance, which causes an NPE in Hadoop 
> versions < 3.2.






[jira] [Updated] (HDDS-4292) Ozone Client not working with Hadoop Version < 3.2

2020-09-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4292:
-
Summary: Ozone Client not working with Hadoop Version  < 3.2  (was: Ozone 
Client not working with Hadoop Version < 3.2)

> Ozone Client not working with Hadoop Version  < 3.2
> ---
>
> Key: HDDS-4292
> URL: https://issues.apache.org/jira/browse/HDDS-4292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-3560 created a new ProxyInfo object when an IllegalAccessError is 
> thrown. But it does not return the new instance, which causes an NPE in Hadoop 
> versions < 3.2.






[jira] [Updated] (HDDS-4292) Ozone Client not working with Hadoop Version < 3.2

2020-09-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4292:
-
Summary: Ozone Client not working with Hadoop Version < 3.2  (was: 
OMFailoverProxyProvider.createOMProxyIfNeeded should return a new proxy 
instance for Hadoop < 3.2)

> Ozone Client not working with Hadoop Version < 3.2
> --
>
> Key: HDDS-4292
> URL: https://issues.apache.org/jira/browse/HDDS-4292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-3560 created a new ProxyInfo object when an IllegalAccessError is 
> thrown. But it does not return the new instance, which causes an NPE in Hadoop 
> versions < 3.2.






[jira] [Updated] (HDDS-3580) Retry Cache in OM

2020-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3580:
-
Component/s: OM HA

> Retry Cache in OM
> -
>
> Key: HDDS-3580
> URL: https://issues.apache.org/jira/browse/HDDS-3580
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM HA
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Retry Cache in OM.pdf
>
>
> The failover proxy provider for OM HA retries requests on a network 
> timeout/connectivity exception. It can retry the request on the same OM or a 
> new OM. If the request has already been processed, the subsequent retry will 
> fail with an error for non-idempotent operations such as createVolume, 
> createBucket, createKey, renameKey, etc. This can cause an application 
> failure, unnecessary overhead in processing the operation again, and possibly 
> metadata corruption during failover.
> This Jira is to discuss this issue in detail and also categorize OM 
> operations.
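A retry cache of the kind proposed can be sketched as below. The names are illustrative, not the eventual OM implementation: each applied transaction is recorded under a unique request key, and a retry of an already-applied non-idempotent request returns the cached result instead of executing the operation again.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative retry cache; not the actual OM implementation.
public class RetryCacheSketch {
    private final Map<String, String> cache = new HashMap<>();

    // Executes op only the first time a request key is seen;
    // retries of the same request return the cached result.
    String apply(String clientId, long callId, Supplier<String> op) {
        String key = clientId + ":" + callId;
        return cache.computeIfAbsent(key, k -> op.get());
    }

    public static void main(String[] args) {
        RetryCacheSketch rc = new RetryCacheSketch();
        int[] executions = {0};
        Supplier<String> createVolume = () -> {
            executions[0]++;               // non-idempotent side effect
            return "OK";
        };
        String first = rc.apply("client-1", 42, createVolume);
        String retry = rc.apply("client-1", 42, createVolume); // retried request
        System.out.println(first + " " + retry + " executions=" + executions[0]);
        // OK OK executions=1
    }
}
```

The key point is that the retried request hits the cache, so the side effect runs exactly once and the client still gets a successful response.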



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Component/s: OM HA

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Use clientID and callID to uniquely identify requests.
> This helps when a write request is retried: if the previous attempt has 
> already been processed, the previous result can be returned from the 
> cache.






[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Affects Version/s: 1.0.0

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Use clientID and callID to uniquely identify requests.
> This helps when a write request is retried: if the previous attempt has 
> already been processed, the previous result can be returned from the 
> cache.






[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Environment: (was: U)

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Use clientID and callID to uniquely identify requests.
> This helps when a write request is retried: if the previous attempt has 
> already been processed, the previous result can be returned from the 
> cache.






[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4262:
-
Status: Patch Available  (was: Open)

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
> Environment: U
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Use clientID and callID to uniquely identify requests.
> This helps when a write request is retried: if the previous attempt has 
> already been processed, the previous result can be returned from the 
> cache.






[jira] [Created] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4262:


 Summary: Use ClientID and CallID from Rpc Client to detect retry 
requests
 Key: HDDS-4262
 URL: https://issues.apache.org/jira/browse/HDDS-4262
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
 Environment: U
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Use clientID and callID to uniquely identify requests.
This helps when a write request is retried: if the previous attempt has already 
been processed, the previous result can be returned from the cache.






[jira] [Resolved] (HDDS-4248) Add ratis log parser to ozone script

2020-09-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4248.
--
Resolution: Invalid

> Add ratis log parser to ozone script
> 
>
> Key: HDDS-4248
> URL: https://issues.apache.org/jira/browse/HDDS-4248
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Resolved] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4210.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> ResolveBucket during checkAcls fails
> 
>
> Key: HDDS-4210
> URL: https://issues.apache.org/jira/browse/HDDS-4210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> In HA, when validateAndUpdateCache resolves a bucket, it checks permissions 
> using checkAcls. But it does not have any RpcContext, so checkAcls fails with 
> an NPE when getting the hostName.
> For the same reason, the information required to check ACLs was added into 
> OMRequest.
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1604)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3497)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3465)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3452)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyRequest.resolveBucketLink(OMKeyRequest.java:96)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:215)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Assigned] (HDDS-4248) Add ratis log parser to ozone script

2020-09-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-4248:


Assignee: Bharat Viswanadham

> Add ratis log parser to ozone script
> 
>
> Key: HDDS-4248
> URL: https://issues.apache.org/jira/browse/HDDS-4248
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Created] (HDDS-4248) Add ratis log parser to ozone script

2020-09-15 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4248:


 Summary: Add ratis log parser to ozone script
 Key: HDDS-4248
 URL: https://issues.apache.org/jira/browse/HDDS-4248
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham









[jira] [Resolved] (HDDS-4075) Retry request on different OM on AccessControlException

2020-09-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4075.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Retry request on different OM on AccessControlException
> ---
>
> Key: HDDS-4075
> URL: https://issues.apache.org/jira/browse/HDDS-4075
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM HA
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> If a client attempts a request on an OM which has not caught up with the 
> leader OM and hence does not have the delegation token, the request could fail 
> with AccessControlException without being tried on other OMs.
> On AccessControlException, all OMs must be tried once before the request is 
> failed.
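The intended behavior can be sketched as below. SecurityException stands in for Hadoop's AccessControlException, and the names are illustrative rather than the actual OMFailoverProxyProvider code: the request is attempted once on each OM, and only after every OM has rejected it is the failure surfaced.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Illustrative failover loop; not the actual OMFailoverProxyProvider code.
public class OmFailoverSketch {
    // Try the call once on every OM before failing the request.
    static String submitWithFailover(List<String> oms, Function<String, String> call) {
        RuntimeException last = null;
        for (String om : oms) {
            try {
                return call.apply(om);
            } catch (SecurityException e) {  // stand-in for AccessControlException
                last = e;                    // remember the failure, fail over
            }
        }
        throw last;                          // every OM tried; now give up
    }

    public static void main(String[] args) {
        // om1 and om2 lag behind the leader and reject the token; om3 accepts.
        String result = submitWithFailover(Arrays.asList("om1", "om2", "om3"),
            om -> {
                if (!om.equals("om3")) {
                    throw new SecurityException("token not found on " + om);
                }
                return "served by " + om;
            });
        System.out.println(result); // served by om3
    }
}
```

Without the loop, the first SecurityException would propagate immediately, which matches the reported behavior of failing before the other OMs are tried.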






[jira] [Commented] (HDDS-4243) Get API not working from S3A filesystem with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195782#comment-17195782
 ] 

Bharat Viswanadham commented on HDDS-4243:
--

With the config changes below, I am able to read a file from Ozone S3 buckets 
with the S3A filesystem.


{code:xml}
<property>
  <name>fs.s3a.change.detection.version.required</name>
  <value>false</value>
</property>
<property>
  <name>fs.s3a.change.detection.mode</name>
  <value>none</value>
</property>
{code}

From my understanding, one reason for this might be that OzoneS3 does not 
support the ETag feature: in HeadObject we returned the ETag as the key 
modification time, but in GetObject no ETag is returned, and this caused the 
failure with the exception.

*S3ACode:*


{code:java}
default:
  // mode == Client or Server; will trigger on version failures
  // of getObjectMetadata even on server.
  return new ImmutablePair<>(true,
      new RemoteFileChangedException(uri,
          operation,
          String.format("%s "
              + CHANGE_DETECTED
              + " during %s%s."
              + " Expected %s got %s",
              getSource(), operation, positionText, revisionId,
              newRevisionId)));





> Get API not working from S3A filesystem with Ozone S3
> -
>
> Key: HDDS-4243
> URL: https://issues.apache.org/jira/browse/HDDS-4243
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: OzoneS3, S3A
>
> The Troubleshooting S3A guide mentions that S3-compatible servers that do not 
> support ETags will see this error.
> Refer to 
> [link|https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/troubleshooting_s3a.html]
>  and look for the section content below.
> Using a third-party S3 implementation that doesn’t support eTags might result 
> in the following error.
> org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
> `s3a://my-bucket/test/file.txt':
>  Change detection policy requires ETag
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:153)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:346)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:372)
> {code:java}
> org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
> `s3a://sept14/dir1/dir2/dir3/key1': Change detection policy requires ETag
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processNewRevision(ChangeTracker.java:275)
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processMetadata(ChangeTracker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:195)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:208)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:359)
>   at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$3(Invoker.java:223)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
>   at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$5(Invoker.java:347)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:343)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:221)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:351)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:464)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:494)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:416)
>   at 
> 

[jira] [Assigned] (HDDS-4243) Get API not working from S3A filesystem with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-4243:


Assignee: Bharat Viswanadham

> Get API not working from S3A filesystem with Ozone S3
> -
>
> Key: HDDS-4243
> URL: https://issues.apache.org/jira/browse/HDDS-4243
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: OzoneS3, S3A
>
> The Troubleshooting S3A guide mentions that S3-compatible servers that do not 
> support ETags will see this error.
> Refer to 
> [link|https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/troubleshooting_s3a.html]
>  and look for the section content below.
> Using a third-party S3 implementation that doesn’t support eTags might result 
> in the following error.
> org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
> `s3a://my-bucket/test/file.txt':
>  Change detection policy requires ETag
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:153)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:200)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:346)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
>   at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
>   at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:372)
> {code:java}
> org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
> `s3a://sept14/dir1/dir2/dir3/key1': Change detection policy requires ETag
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processNewRevision(ChangeTracker.java:275)
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processMetadata(ChangeTracker.java:261)
>   at 
> org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:195)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:208)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:359)
>   at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$3(Invoker.java:223)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
>   at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$5(Invoker.java:347)
>   at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:343)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:221)
>   at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:265)
>   at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:351)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:464)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
>   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:494)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:416)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:351)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:286)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:271)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:266)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>   at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:237)
>   at 
> 

[jira] [Updated] (HDDS-4243) Get API not working from S3A filesystem with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4243:
-
Component/s: S3

> Get API not working from S3A filesystem with Ozone S3
> -
>
> Key: HDDS-4243
> URL: https://issues.apache.org/jira/browse/HDDS-4243
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Priority: Major
>

[jira] [Updated] (HDDS-4243) Get API not working from S3A filesystem with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4243:
-
Labels: OzoneS3 S3A  (was: )

> Get API not working from S3A filesystem with Ozone S3
> -
>
> Key: HDDS-4243
> URL: https://issues.apache.org/jira/browse/HDDS-4243
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: OzoneS3, S3A
>

[jira] [Updated] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4209:
-
Labels: OzoneS3 S3A pull-request-available  (was: pull-request-available)

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: OzoneS3, S3A, pull-request-available
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with below error
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  The S3A filesystem creates an empty 0-byte object when creating a directory.
> *Entries in the Ozone KeyTable after the directory is created*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is treated as a file rather than a 
> directory. (In Ozone, directory keys currently end with a trailing "/".)
> So, when d11/d12/file is created, we check that the parent exists; d11/d12 is 
> treated as a file, and the request fails with NOT_A_FILE.
> When the config is disabled it works fine, because key creation then skips 
> all filesystem-semantics checks and does not create intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  
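The failure mode described above can be reduced to a small simulation. The following is an illustrative sketch only, not Ozone's actual implementation; the class name, the simplified parent-existence check, and the key-table contents are assumptions based on the description (directory keys end with a trailing "/"):

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative simulation of the HDDS-4209 failure. Not Ozone's real code.
public class TrailingSlashSketch {

    /**
     * Simulates the parent check before creating a file key.
     * Returns "OK" or a simulated error code.
     */
    static String tryCreateFile(Set<String> keyTable, String key) {
        // Parent directory key, e.g. "d11/d12/" for key "d11/d12/file1".
        String parentDir = key.substring(0, key.lastIndexOf('/') + 1);
        if (keyTable.contains(parentDir)) {
            return "OK"; // parent exists as a directory key (trailing "/")
        }
        String parentNoSlash = parentDir.substring(0, parentDir.length() - 1);
        if (keyTable.contains(parentNoSlash)) {
            return "NOT_A_FILE"; // parent exists, but as a plain file key
        }
        return "OK"; // parent absent: intermediate dirs would be created
    }

    public static void main(String[] args) {
        // After S3A "mkdir -p": S3A PUTs a 0-byte object for the leaf,
        // so "d11/d12" is stored WITHOUT the trailing "/".
        Set<String> viaS3A = new TreeSet<>(Set.of("d11/", "d11/d12"));
        System.out.println(tryCreateFile(viaS3A, "d11/d12/file1")); // NOT_A_FILE

        // After ofs/o3fs mkdir: createDirectory stores proper directory keys.
        Set<String> viaOfs = new TreeSet<>(Set.of("d11/", "d11/d12/"));
        System.out.println(tryCreateFile(viaOfs, "d11/d12/file1")); // OK
    }
}
```

Running the sketch shows why the same key layout succeeds via ofs/o3fs but fails via S3A.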



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4243) Get API not working from S3A filesystem with Ozone S3

2020-09-14 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4243:


 Summary: Get API not working from S3A filesystem with Ozone S3
 Key: HDDS-4243
 URL: https://issues.apache.org/jira/browse/HDDS-4243
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


The S3A troubleshooting guide mentions that S3-compatible servers that do not 
support ETags will see this error.

Refer to 
[link|https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/troubleshooting_s3a.html]
 and look for the section below:
Using a third-party S3 implementation that doesn’t support eTags might result 
in the following error.

org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
`s3a://my-bucket/test/file.txt':
 Change detection policy requires ETag
  at 
org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:153)
  at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:200)
  at 
org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:346)
  at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
  at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
  at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
  at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
  at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
  at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
  at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
  at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:339)
  at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:372)


{code:java}
org.apache.hadoop.fs.s3a.NoVersionAttributeException: 
`s3a://sept14/dir1/dir2/dir3/key1': Change detection policy requires ETag
at 
org.apache.hadoop.fs.s3a.impl.ChangeTracker.processNewRevision(ChangeTracker.java:275)
at 
org.apache.hadoop.fs.s3a.impl.ChangeTracker.processMetadata(ChangeTracker.java:261)
at 
org.apache.hadoop.fs.s3a.impl.ChangeTracker.processResponse(ChangeTracker.java:195)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:208)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:359)
at 
org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$3(Invoker.java:223)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
at 
org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$5(Invoker.java:347)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:343)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:221)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:265)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:351)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:464)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:494)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:416)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:351)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:286)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:271)
at 
org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:266)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:237)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
get: `s3a://sept14/dir1/dir2/dir3/key1': Change detection policy 
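If the object store genuinely cannot supply ETags, the S3A troubleshooting guide points to relaxing the change-detection policy instead. A hedged core-site.xml sketch follows; the property names below are as documented for recent Hadoop releases (3.2+), so verify them against the docs for your exact Hadoop version before relying on them:

```xml
<!-- Relax S3A change detection for stores without ETag support.
     Check these property names against your Hadoop version's docs. -->
<property>
  <name>fs.s3a.change.detection.mode</name>
  <!-- one of: server, client, warn, none -->
  <value>none</value>
</property>
```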

[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-14 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195679#comment-17195679
 ] 

Bharat Viswanadham commented on HDDS-4155:
--

[~elek]
I have committed this. The change only takes effect when the filesystem-paths 
config is enabled; when it is false, the old behavior is still allowed. If you 
have any more comments, we can discuss them in a new Jira.

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So, now in Ozone, we will have a directory and a file with the name "c". 
> This is one example scenario in MPU.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail at the last stage, after the user has 
> uploaded all the data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1 2. mkdir t1 3. Save t1: fails with "t1" is a 
> directory.)
>  # During directory creation, check whether any open key creation with the 
> same name is in progress, and fail if so.
>  
> None of the above approaches is final; this Jira was opened to discuss the 
> issue and come up with a solution.
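The second proposal can be sketched in a few lines. This is a hypothetical illustration only, not Ozone code; the OpenKeyRegistry class and its method names are invented for the example:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of proposal 2: reject createDirectory while an open
// (uncommitted) key with the same path exists. Not Ozone's real code.
public class OpenKeyRegistry {
    private final Set<String> openKeys = new HashSet<>();
    private final Set<String> directories = new HashSet<>();

    public void openKey(String path) {
        openKeys.add(path);
    }

    public void commitKey(String path) {
        openKeys.remove(path);
    }

    /** Returns false (rejected) if an open key with the same path exists. */
    public boolean createDirectory(String path) {
        if (openKeys.contains(path)) {
            return false; // conflict: key creation for this path in progress
        }
        directories.add(path);
        return true;
    }

    public static void main(String[] args) {
        OpenKeyRegistry om = new OpenKeyRegistry();
        om.openKey("/a/b/c");                              // 1. open key
        System.out.println(om.createDirectory("/a/b/c"));  // 2. rejected: false
        om.commitKey("/a/b/c");                            // 3. commit key
        // A conflict with already-committed keys would need a separate check.
        System.out.println(om.createDirectory("/a/b/c"));  // true
    }
}
```

The sketch only covers the open-key window; the commit-time check of proposal 1 would be a separate guard.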






[jira] [Updated] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4155:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>






[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-11 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194562#comment-17194562
 ] 

Bharat Viswanadham commented on HDDS-4209:
--

Posted a draft PR; this idea came up while discussing with [~avijayan]. I 
tested the fix on a cluster, after which I am able to create a key after 
creating a directory from S3A.

Thank You [~avijayan] and [~ppogde] for the discussion.

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>






[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-11 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194513#comment-17194513
 ] 

Bharat Viswanadham commented on HDDS-4209:
--

Looked into the S3A code: when a directory path does not end with "/", S3A 
explicitly appends one before sending the request.
[~elek] You might be running with this config disabled; when the config is 
disabled we do not normalize, so you are seeing the entry with a trailing "/". 
The problem occurs when the config is enabled.
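The normalization described above amounts to a one-line rule. The helper below is an illustrative stand-in for what S3A does internally, not its actual code:

```java
// Illustrative sketch: S3A appends a trailing "/" to a directory key
// before sending the PUT. Not S3A's actual implementation.
public class DirKeyNormalizer {
    static String toDirectoryKey(String path) {
        return path.endsWith("/") ? path : path + "/";
    }

    public static void main(String[] args) {
        System.out.println(toDirectoryKey("d11/d12"));  // d11/d12/
        System.out.println(toDirectoryKey("d11/d12/")); // d11/d12/
    }
}
```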



> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with below error
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When the S3A filesystem creates a directory, it creates an empty file.
> *Entries in the Ozone KeyTable after creating the directory:*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
> directory. (In Ozone, directory keys currently end with a trailing "/".)
> So when d11/d12/file is created, the parent-exists check treats d11/d12 as 
> a file and fails with NOT_A_FILE.
> When the flag is disabled it works fine, because key create then does not 
> check any filesystem semantics and does not create intermediate 
> directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  
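The trailing-"/" convention described above can be sketched as follows. This is an illustrative Python model, not Ozone's actual KeyTable code; the helper names are hypothetical:

```python
# Illustrative model of Ozone's KeyTable convention: directory keys
# end with "/", so a 0-byte S3A "directory" key like "d11/d12"
# (no trailing slash) is classified as a file.

def verify_parents(key_table, new_key):
    """Walk the parent components of new_key; a parent stored WITHOUT
    a trailing '/' is a plain file, so creation fails (NOT_A_FILE)."""
    parts = new_key.split("/")[:-1]
    for i in range(1, len(parts) + 1):
        parent = "/".join(parts[:i])
        if parent in key_table:        # stored without "/": a file
            return "NOT_A_FILE"
        # parent + "/" in key_table would be a proper directory marker
    return "OK"

# ofs/o3fs mkdir writes real directory markers with trailing "/"
ofs_table = {"d11/", "d11/d12/"}
# S3A mkdir -p PUTs a 0-byte object, so "d11/d12" has no trailing "/"
s3a_table = {"d11/", "d11/d12"}

print(verify_parents(ofs_table, "d11/d12/file1"))  # OK
print(verify_parents(s3a_table, "d11/d12/file1"))  # NOT_A_FILE
```

With the S3A-style table, the intermediate key is treated as an existing file, which matches the NOT_A_FILE failure in the log above.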



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-11 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193033#comment-17193033
 ] 

Bharat Viswanadham edited comment on HDDS-4209 at 9/11/20, 8:16 PM:


Hi [~elek]
For your test, have you enabled the config *ozone.om.enable.filesystem.paths*?
And how did you create the directory? (If using ofs/o3fs, it will work fine, 
as mkdir for ofs calls createDirectory rather than putting a 0-byte file.)


[root@bvoz-1 ~]# oapi create-bucket --bucket sample
{
"Location": "https://bvoz-1.bvoz.root.hwx.site:9879/sample"
}

[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://sample/dir1/dir2


[root@bvoz-1 ~]# oapi list-objects --bucket sample
{
"Contents": [
{
"LastModified": "2020-09-09T16:50:36.888Z",
"ETag": "2020-09-09T16:50:36.888Z",
"StorageClass": "STANDARD",
"Key": "dir1/",
"Size": 0
},
{
"LastModified": "2020-09-09T16:50:36.981Z",
"ETag": "2020-09-09T16:50:36.981Z",
"StorageClass": "STANDARD",
"Key": "dir1/dir2",
"Size": 0
}
]
}

As explained in the Jira description, when *mkdir -p* is run on S3A it creates a 
0-byte file, so Ozone will not append "/" (because we normalize the path when 
this flag is enabled) and will not consider it a directory.



hdfs dfs -put /etc/hadoop/conf/ozone-site.xml s3a://sample/dir1/dir2/file1

This fails with the below error, as it considers /dir1/dir2 a file, not a directory.


{code:java}
4:54:16.945 PM  ERROR   ObjectEndpoint  
Exception occurred in PutObject
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: dir1/dir2/file1._COPYING_ as there is already file in the given path
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:584)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:688)
at 
org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:396)
at 
org.apache.hadoop.ozone.s3.endpoint.ObjectEndpoint.put(ObjectEndpoint.java:168)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
{code}
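The difference between the two mkdir paths can be sketched like this. This is a hedged model; s3a_mkdir/ofs_mkdir are illustrative stand-ins, not real S3A or Ozone APIs:

```python
# Hedged sketch of the two mkdir paths. S3A has no directory
# primitive, so mkdir is effectively a PUT of a 0-byte object;
# ofs/o3fs calls OM createDirectory, which stores a "/"-suffixed
# marker key. Function names are illustrative.

def s3a_mkdir(bucket, path):
    bucket[path.rstrip("/")] = b""            # key "dir1/dir2"

def ofs_mkdir(key_table, path):
    key_table[path.rstrip("/") + "/"] = b""   # key "dir1/dir2/"

s3_bucket, om_table = {}, {}
s3a_mkdir(s3_bucket, "dir1/dir2")
ofs_mkdir(om_table, "dir1/dir2")

print(sorted(s3_bucket))   # ['dir1/dir2']  - looks like a file to OM
print(sorted(om_table))    # ['dir1/dir2/'] - a real directory marker
```

This is why the same `hdfs dfs -put` succeeds after an ofs/o3fs mkdir but fails after an S3A mkdir.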






was (Author: bharatviswa):
Hi [~elek]
For your test, have you enabled the config *ozone.om.enable.filesystem.paths*?
And how did you create the directory? (If using ofs/o3fs, it will work fine, 
as mkdir for ofs calls createDirectory rather than putting a 0-byte file.)


[root@bvoz-1 ~]# oapi create-bucket --bucket sample
{
"Location": "https://bvoz-1.bvoz.root.hwx.site:9879/sample"
}

[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://sample/dir1/dir2


[root@bvoz-1 ~]# oapi list-objects --bucket sample
{
"Contents": [
{
"LastModified": "2020-09-09T16:50:36.888Z",
"ETag": "2020-09-09T16:50:36.888Z",
"StorageClass": "STANDARD",
"Key": "dir1/",
"Size": 0
},
{
"LastModified": "2020-09-09T16:50:36.981Z",
"ETag": "2020-09-09T16:50:36.981Z",
"StorageClass": "STANDARD",
"Key": "dir1/dir2",
"Size": 0
}
]
}

As explained in the Jira description, when *mkdir -p* is run on S3A it creates a 
0-byte file, so Ozone will not append "/" and will not consider it a 
directory.



hdfs dfs -put /etc/hadoop/conf/ozone-site.xml s3a://sample/dir1/dir2/file1

This fails with the below error, as it considers /dir1/dir2 a file, not a directory.


{code:java}
4:54:16.945 PM  ERROR   ObjectEndpoint  
Exception occurred in PutObject
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: dir1/dir2/file1._COPYING_ as there is already file in the given path
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:584)
at 

[jira] [Comment Edited] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193033#comment-17193033
 ] 

Bharat Viswanadham edited comment on HDDS-4209 at 9/9/20, 4:56 PM:
---

Hi [~elek]
For your test, have you enabled the config *ozone.om.enable.filesystem.paths*?
And how did you create the directory? (If using ofs/o3fs, it will work fine, 
as mkdir for ofs calls createDirectory rather than putting a 0-byte file.)


[root@bvoz-1 ~]# oapi create-bucket --bucket sample
{
"Location": "https://bvoz-1.bvoz.root.hwx.site:9879/sample"
}

[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://sample/dir1/dir2


[root@bvoz-1 ~]# oapi list-objects --bucket sample
{
"Contents": [
{
"LastModified": "2020-09-09T16:50:36.888Z",
"ETag": "2020-09-09T16:50:36.888Z",
"StorageClass": "STANDARD",
"Key": "dir1/",
"Size": 0
},
{
"LastModified": "2020-09-09T16:50:36.981Z",
"ETag": "2020-09-09T16:50:36.981Z",
"StorageClass": "STANDARD",
"Key": "dir1/dir2",
"Size": 0
}
]
}

As explained in the Jira description, when *mkdir -p* is run on S3A it creates a 
0-byte file, so Ozone will not append "/" and will not consider it a 
directory.



hdfs dfs -put /etc/hadoop/conf/ozone-site.xml s3a://sample/dir1/dir2/file1

This fails with the below error, as it considers /dir1/dir2 a file, not a directory.


{code:java}
4:54:16.945 PM  ERROR   ObjectEndpoint  
Exception occurred in PutObject
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: dir1/dir2/file1._COPYING_ as there is already file in the given path
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:584)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:688)
at 
org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:396)
at 
org.apache.hadoop.ozone.s3.endpoint.ObjectEndpoint.put(ObjectEndpoint.java:168)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
{code}






was (Author: bharatviswa):
Hi [~elek]
For your test, have you enabled this config?
And how did you create the directory? (If using ofs/o3fs, it will work fine, 
as mkdir for ofs calls createDirectory rather than putting a 0-byte file.)


[root@bvoz-1 ~]# oapi create-bucket --bucket sample
{
"Location": "https://bvoz-1.bvoz.root.hwx.site:9879/sample"
}

[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://sample/dir1/dir2


[root@bvoz-1 ~]# oapi list-objects --bucket sample
{
"Contents": [
{
"LastModified": "2020-09-09T16:50:36.888Z",
"ETag": "2020-09-09T16:50:36.888Z",
"StorageClass": "STANDARD",
"Key": "dir1/",
"Size": 0
},
{
"LastModified": "2020-09-09T16:50:36.981Z",
"ETag": "2020-09-09T16:50:36.981Z",
"StorageClass": "STANDARD",
"Key": "dir1/dir2",
"Size": 0
}
]
}

As explained in the Jira description, when *mkdir -p* is run on S3A it creates a 
0-byte file, so Ozone will not append "/" and will not consider it a 
directory.



hdfs dfs -put /etc/hadoop/conf/ozone-site.xml s3a://sample/dir1/dir2/file1

This fails with the below error, as it considers /dir1/dir2 a file, not a directory.


{code:java}
4:54:16.945 PM  ERROR   ObjectEndpoint  
Exception occurred in PutObject
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: dir1/dir2/file1._COPYING_ as there is already file in the given path
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:584)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:688)
at 

[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-09 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17193033#comment-17193033
 ] 

Bharat Viswanadham commented on HDDS-4209:
--

Hi [~elek]
For your test, have you enabled this config?
And how did you create the directory? (If using ofs/o3fs, it will work fine, 
as mkdir for ofs calls createDirectory rather than putting a 0-byte file.)


[root@bvoz-1 ~]# oapi create-bucket --bucket sample
{
"Location": "https://bvoz-1.bvoz.root.hwx.site:9879/sample"
}

[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://sample/dir1/dir2


[root@bvoz-1 ~]# oapi list-objects --bucket sample
{
"Contents": [
{
"LastModified": "2020-09-09T16:50:36.888Z",
"ETag": "2020-09-09T16:50:36.888Z",
"StorageClass": "STANDARD",
"Key": "dir1/",
"Size": 0
},
{
"LastModified": "2020-09-09T16:50:36.981Z",
"ETag": "2020-09-09T16:50:36.981Z",
"StorageClass": "STANDARD",
"Key": "dir1/dir2",
"Size": 0
}
]
}

As explained in the Jira description, when *mkdir -p* is run on S3A it creates a 
0-byte file, so Ozone will not append "/" and will not consider it a 
directory.



hdfs dfs -put /etc/hadoop/conf/ozone-site.xml s3a://sample/dir1/dir2/file1

This fails with the below error, as it considers /dir1/dir2 a file, not a directory.


{code:java}
4:54:16.945 PM  ERROR   ObjectEndpoint  
Exception occurred in PutObject
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: dir1/dir2/file1._COPYING_ as there is already file in the given path
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:593)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.openKey(OzoneManagerProtocolClientSideTranslatorPB.java:584)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createKey(RpcClient.java:688)
at 
org.apache.hadoop.ozone.client.OzoneBucket.createKey(OzoneBucket.java:396)
at 
org.apache.hadoop.ozone.s3.endpoint.ObjectEndpoint.put(ObjectEndpoint.java:168)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
{code}





[jira] [Comment Edited] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-08 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192296#comment-17192296
 ] 

Bharat Viswanadham edited comment on HDDS-4209 at 9/8/20, 4:09 PM:
---

[~elek]
I am not sure this is a corner case, because the use case here is straightforward:
/d11/d12 -> creates a directory, but Ozone considers it a file. (Create 
directory is a put request for a 0-byte file from S3A; it does not 
call createDirectory on OM.)
/d11/d12/file1 -> So, when trying to create file1, Ozone considers "/d11/d12" a 
file instead of a directory, which causes this error.

{quote}I think the intermediate directory creation should be more 
permissive.{quote}
What do you mean by this? 



was (Author: bharatviswa):
[~elek]
I am not sure this is a corner case, because the use case here is straightforward:
/d11/d12 -> creates a directory, but Ozone considers it a file. (Create 
directory is a put request for a 0-byte file from S3A; it does not 
call createDirectory.)
/d11/d12/file1 -> So, when trying to create file1, Ozone considers "/d11/d12" a 
file instead of a directory, which causes this error.

{quote}I think the intermediate directory creation should be more 
permissive.{quote}
What do you mean by this? 





[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-08 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192296#comment-17192296
 ] 

Bharat Viswanadham commented on HDDS-4209:
--

[~elek]
I am not sure this is a corner case, because the use case here is straightforward:
/d11/d12 -> creates a directory, but Ozone considers it a file. (Create 
directory is a put request for a 0-byte file.)
/d11/d12/file1 -> So, when trying to create file1, Ozone considers "/d11/d12" a 
file instead of a directory, which causes this error.

{quote}I think the intermediate directory creation should be more 
permissive.{quote}
What do you mean by this? 





[jira] [Comment Edited] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-08 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192296#comment-17192296
 ] 

Bharat Viswanadham edited comment on HDDS-4209 at 9/8/20, 4:08 PM:
---

[~elek]
I am not sure this is a corner case, because the use case here is straightforward:
/d11/d12 -> creates a directory, but Ozone considers it a file. (Create 
directory is a put request for a 0-byte file from S3A; it does not 
call createDirectory.)
/d11/d12/file1 -> So, when trying to create file1, Ozone considers "/d11/d12" a 
file instead of a directory, which causes this error.

{quote}I think the intermediate directory creation should be more 
permissive.{quote}
What do you mean by this? 



was (Author: bharatviswa):
[~elek]
I am not sure this is a corner case, because the use case here is straightforward:
/d11/d12 -> creates a directory, but Ozone considers it a file. (Create 
directory is a put request for a 0-byte file.)
/d11/d12/file1 -> So, when trying to create file1, Ozone considers "/d11/d12" a 
file instead of a directory, which causes this error.

{quote}I think the intermediate directory creation should be more 
permissive.{quote}
What do you mean by this? 





[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Target Version/s: 1.1.0

> ResolveBucket during checkAcls fails
> 
>
> Key: HDDS-4210
> URL: https://issues.apache.org/jira/browse/HDDS-4210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>
> In HA, when validateAndUpdateCache resolves the bucket, it checks permissions 
> using checkAcls. But there is no RpcContext, so checkAcls fails with an 
> NPE when getting the hostName.
> For this reason, we added the information required to check ACLs into the 
> OMRequest.
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1604)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3497)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3465)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3452)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyRequest.resolveBucketLink(OMKeyRequest.java:96)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:215)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Description: 
In HA, when validateAndUpdateCache resolves the bucket, it checks permissions 
using checkAcls. But there is no RpcContext, so checkAcls fails with an NPE 
when getting the hostName.

For this reason, we added the information required to check ACLs into the 
OMRequest.


{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1604)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3497)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3465)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3452)
at 
org.apache.hadoop.ozone.om.request.key.OMKeyRequest.resolveBucketLink(OMKeyRequest.java:96)
at 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:215)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}


  was:
In HA, when validateAndUpdateCache resolves the bucket, it checks permissions 
using checkAcls. But there is no RpcContext, so checkAcls fails with an NPE 
when getting the hostName.


{code:java}
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1604)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3497)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3465)
at 
org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3452)
at 
org.apache.hadoop.ozone.om.request.key.OMKeyRequest.resolveBucketLink(OMKeyRequest.java:96)
at 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:215)
at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
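For context, the failure mode can be illustrated with a minimal, hypothetical Java sketch (the class, field, and method names below are illustrative, not Ozone's actual code): on the direct RPC path a thread-local connection context supplies the caller's hostname, but on the Ratis state-machine apply path no such context exists, so dereferencing it throws the NPE; carrying the caller information inside the request itself avoids this.

```java
// Hypothetical sketch of the NPE: hostname resolution that depends on a
// thread-local RPC context (absent on the Ratis apply thread) versus
// hostname carried inside the request itself.
public class AclContextSketch {

    // Stand-in for an RPC-layer thread-local (the per-call connection
    // context); it is null on the state-machine apply thread.
    static final ThreadLocal<String> RPC_CLIENT_HOST = new ThreadLocal<>();

    /** Resolves the caller host, preferring info carried in the request. */
    static String checkAcls(String hostFromRequest) {
        String host = hostFromRequest != null
                ? hostFromRequest          // HA-safe: travels with the OMRequest
                : RPC_CLIENT_HOST.get();   // null outside an RPC handler thread
        return host.toLowerCase();         // NPE surfaces here when both are missing
    }

    public static void main(String[] args) {
        // Direct RPC path: the thread-local context is populated, so it works.
        RPC_CLIENT_HOST.set("Client-1.example.com");
        System.out.println(checkAcls(null));
        // Ratis apply path: no RPC context; passing the host inside the
        // request (as the fix does) avoids the NPE.
        RPC_CLIENT_HOST.remove();
        System.out.println(checkAcls("Client-1.example.com"));
    }
}
```

The same call that succeeds on the handler thread throws NullPointerException on the apply thread unless the request carries the host itself.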



> ResolveBucket during checkAcls fails
> 
>
> Key: HDDS-4210
> URL: https://issues.apache.org/jira/browse/HDDS-4210
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> In HA, validateAndUpdateCache resolves the bucket via resolveBucket, which 
> checks permissions using checkAcls. But on this path there is no RpcContext, 
> so checkAcls fails with an NPE when retrieving the hostName.
> For this reason, the information required to check ACLs was added to the 
> OMRequest itself.
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.checkAcls(OzoneManager.java:1604)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3497)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3465)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.resolveBucketLink(OzoneManager.java:3452)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyRequest.resolveBucketLink(OMKeyRequest.java:96)
>   at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:215)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>   at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}

[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Labels:   (was: OzoneManager)




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Labels: OzoneManager  (was: )







[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Component/s: Ozone Manager







[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Priority: Blocker  (was: Major)







[jira] [Created] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4210:


 Summary: ResolveBucket during checkAcls fails
 Key: HDDS-4210
 URL: https://issues.apache.org/jira/browse/HDDS-4210
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In HA, validateAndUpdateCache resolves the bucket via resolveBucket, which 
checks permissions using checkAcls. But on this path there is no RpcContext, 
so checkAcls fails with an NPE when retrieving the hostName.









[jira] [Updated] (HDDS-4210) ResolveBucket during checkAcls fails

2020-09-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4210:
-
Affects Version/s: 1.0.0







[jira] [Comment Edited] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-04 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190803#comment-17190803
 ] 

Bharat Viswanadham edited comment on HDDS-4155 at 9/4/20, 4:05 PM:
---

{quote}I am wondering what is the problem with it. It seems that Hadoop 
Compatible File System can handle this and display both.
{quote}
I have not tested this myself, but I think the point is that S3AFileSystem 
supports it; not every Hadoop Compatible FileSystem does (example below). My 
guess is that since S3A has no control over the AWS S3 implementation, it may 
not be able to avoid this. In the Ozone case, however, we can catch this 
during KeyCommit and block it. So the question is: what kind of semantics do 
we want to support in Ozone?

 
{quote}It seems to be very strange for me as this is something which should be 
avoid with posix fs, but if hadoop can support it...
{quote}
An example: HDFS disallows this; it does not allow a file and a directory 
with the same name in a path.

 

 

 


was (Author: bharatviswa):
{quote}I am wondering what is the problem with it. It seems that Hadoop 
Compatible File System can handle this and display both.
{quote}
Not tested my self, I think here to say S3AFileSystem supports it not every 
Hadoop Compatible FileSystem supports it(Example below). My guess is as S3A 
does not have control over AWS S3 implementation, might be they cannot avoid 
this. But in the Ozone case, during KeyCommit we can catch this, and block it. 
So, the question is do we want to support such semantics in Ozone?

 
{quote}It seems to be very strange for me as this is something which should be 
avoid with posix fs, but if hadoop can support it...
{quote}
An Example, HDFS disallows this, it does not allow file and a directory with 
the same name in a path.

 

 

 

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create Key via S3, and Create Directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have directory and file with name "c"
> When created through Fs interface.
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have directory and file with name "c"
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So, now in Ozone, we will have directory and file with name "c".  In MPU this 
> is one example scenario.
>  
> Few proposals/ideas to solve this:
>  # Check during commit whether a directory with the same name already 
> exists. The disadvantage is that we fail at the last stage, after the user 
> has uploaded all the data. (A file system with a create in progress behaves 
> similarly. Scenario: 1. vi t1 2. mkdir t1 3. save t1: fails with "t1" is a 
> directory.)
>  # During directory creation, check whether any open key with the same name 
> is being created, and fail if so.
>  
> None of the above approaches is final; this Jira was opened to discuss the 
> issue and agree on a solution.
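The first proposal (a commit-time check) can be sketched as follows. This is a minimal illustration with hypothetical class and method names, using the trailing-"/" convention for directory entries; it is not Ozone's actual key-table code.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of proposal 1: at CommitKey time, reject the commit if
// a directory entry (a key ending with "/") already exists for the same path.
public class CommitConflictSketch {
    // Stand-in for the OM key table: committed entries; directories end in "/".
    final Set<String> keyTable = new HashSet<>();

    void createDirectory(String path) {
        keyTable.add(path.endsWith("/") ? path : path + "/");
    }

    /** Returns true if the commit is accepted, false on a directory conflict. */
    boolean commitKey(String path) {
        if (keyTable.contains(path + "/")) {
            return false; // a directory with the same name already exists
        }
        keyTable.add(path);
        return true;
    }
}
```

With this check, the sequence open key -> /a/b/c, CreateDirectory -> /a/b/c, CommitKey -> /a/b/c fails at the commit step instead of leaving both a file and a directory named "c".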






[jira] [Comment Edited] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-04 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190803#comment-17190803
 ] 

Bharat Viswanadham edited comment on HDDS-4155 at 9/4/20, 4:04 PM:
---

{quote}I am wondering what is the problem with it. It seems that Hadoop 
Compatible File System can handle this and display both.
{quote}
Not tested my self, I think here to say S3AFileSystem supports it not every 
Hadoop Compatible FileSystem supports it(Example below). My guess is as S3A 
does not have control over AWS S3 implementation, might be they cannot avoid 
this. But in the Ozone case, during KeyCommit we can catch this, and block it. 
So, the question is do we want to support such semantics in Ozone?

 
{quote}It seems to be very strange for me as this is something which should be 
avoid with posix fs, but if hadoop can support it...
{quote}
An Example, HDFS disallows this, it does not allow file and a directory with 
the same name in a path.

 

 

 


was (Author: bharatviswa):
{quote}I am wondering what is the problem with it. It seems that Hadoop 
Compatible File System can handle this and display both.
{quote}
Not tested my self, I think here to say S3AFileSystem supports it not every 
Hadoop Compatible FileSystem supports it(Example below). My guess is as S3A 
does not have control over AWS S3 implementation, they cannot avoid this. But 
in the Ozone case, during KeyCommit we can catch this, and block it. So, the 
question is do we want to support such semantics in Ozone?

 
{quote}It seems to be very strange for me as this is something which should be 
avoid with posix fs, but if hadoop can support it...
{quote}
An Example, HDFS disallows this, it does not allow file and a directory with 
the same name in a path.

 

 

 







[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-04 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190803#comment-17190803
 ] 

Bharat Viswanadham commented on HDDS-4155:
--

{quote}I am wondering what is the problem with it. It seems that Hadoop 
Compatible File System can handle this and display both.
{quote}
Not tested my self, I think here to say S3AFileSystem supports it not every 
Hadoop Compatible FileSystem supports it(Example below). My guess is as S3A 
does not have control over AWS S3 implementation, they cannot avoid this. But 
in the Ozone case, during KeyCommit we can catch this, and block it. So, the 
question is do we want to support such semantics in Ozone?

 
{quote}It seems to be very strange for me as this is something which should be 
avoid with posix fs, but if hadoop can support it...
{quote}
An Example, HDFS disallows this, it does not allow file and a directory with 
the same name in a path.

 

 

 







[jira] [Resolved] (HDDS-3077) Lost volume after changing access key

2020-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3077.
--
Resolution: Invalid

This is no longer an issue: the logic that derived the volume from the user's 
credentials has been removed, and the fixed volume "s3v" is used instead.

This behavior was changed by HDDS-3385.

> Lost volume after changing access key
> -
>
> Key: HDDS-3077
> URL: https://issues.apache.org/jira/browse/HDDS-3077
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Beata Sudi
>Priority: Critical
>
> When using the S3 API, Ozone generates the volume depending on the user's 
> access key.  When the access key is changed, it becomes no longer reachable 
> by the user.






[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-03 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190544#comment-17190544
 ] 

Bharat Viswanadham commented on HDDS-4155:
--

Yes, currently this can happen, because we do not check whether a file or 
directory already exists during key commit.

This PR tries to fix the problem; please help with the review if you have 
time.







[jira] [Comment Edited] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-03 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190541#comment-17190541
 ] 

Bharat Viswanadham edited comment on HDDS-4097 at 9/4/20, 5:00 AM:
---

One more interesting finding:

HDDS-4209: the S3A filesystem does not work with Ozone due to the creation of 
intermediate directories; it only works when ozone.om.enable.filesystem.paths 
is disabled. For more on the reason behind this, refer to that Jira.


was (Author: bharatviswa):
One more interesting thing 

HDDS-4029 S3A filesystem does not work with Ozone due to the creation of 
intermediate directories, only works when ozone.om.enable.filesystem.paths 
disabled. For more info on the reason behind this refer to the Jira.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement changes required to use Ozone buckets when data is 
> ingested via S3 and use the bucket/volume via OzoneFileSystem. Initial 
> implementation for this is done as part of HDDS-3955. There are few API's 
> which have missed the changes during the implementation of HDDS-3955. 
> Attached design document which discusses each API,  and what changes are 
> required.
> Excel sheet has information about each API, from what all interfaces the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 






[jira] [Updated] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4209:
-
Priority: Blocker  (was: Major)

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with below error
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When creating a directory, the S3A filesystem creates an empty file as a 
> directory marker.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, the parent-exists check treats d11/d12 as 
> a file and fails with NOT_A_FILE.
> When the flag is disabled it works fine, because key creation then neither 
> checks filesystem semantics nor creates intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  






[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-03 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190541#comment-17190541
 ] 

Bharat Viswanadham commented on HDDS-4097:
--

One more interesting thing:

Per HDDS-4029, the S3A filesystem does not work with Ozone due to the creation 
of intermediate directories; it only works when 
ozone.om.enable.filesystem.paths is disabled. For more information on the 
reason, refer to that Jira.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required so that Ozone buckets whose 
> data is ingested via S3 can also be used via OzoneFileSystem. The initial 
> implementation was done as part of HDDS-3955, but a few APIs missed those 
> changes. The attached design document discusses each API and what changes 
> are required.
> The Excel sheet lists, for each API, the interfaces from which the OM API 
> is used and the changes required for the API to support inter-operability.
> Note: The proposal for delete/rename is still under discussion and not yet 
> finalized.






[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-03 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17190540#comment-17190540
 ] 

Bharat Viswanadham commented on HDDS-4209:
--

From this, it looks like we currently need to disable the flag when a bucket 
needs to work with S3AFileSystem.

 

cc [~arpaga] [~elek]

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When creating a directory, the S3A filesystem creates an empty file as a 
> directory marker.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, the parent-exists check treats d11/d12 as 
> a file and fails with NOT_A_FILE.
> When the flag is disabled it works fine, because key creation then neither 
> checks filesystem semantics nor creates intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  






[jira] [Assigned] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-4209:


Assignee: Bharat Viswanadham

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When creating a directory, the S3A filesystem creates an empty file as a 
> directory marker.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, the parent-exists check treats d11/d12 as 
> a file and fails with NOT_A_FILE.
> When the flag is disabled it works fine, because key creation then neither 
> checks filesystem semantics nor creates intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  






[jira] [Updated] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-03 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4209:
-
Parent: HDDS-4097
Issue Type: Sub-task  (was: Bug)

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  When creating a directory, the S3A filesystem creates an empty file as a 
> directory marker.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, the parent-exists check treats d11/d12 as 
> a file and fails with NOT_A_FILE.
> When the flag is disabled it works fine, because key creation then neither 
> checks filesystem semantics nor creates intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  






[jira] [Created] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4209:


 Summary: S3A Filesystem does not work with Ozone S3
 Key: HDDS-4209
 URL: https://issues.apache.org/jira/browse/HDDS-4209
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


When *ozone.om.enable.filesystem.paths* is enabled

 

hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success

hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below

 
{code:java}
2020-09-04 03:53:51,377 ERROR 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation failed. 
Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
file: cp/k1._COPYING_ as there is already file in the given path
 at 
org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
 at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
 at 
org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
 at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748){code}
*Reason for this*
 When creating a directory, the S3A filesystem creates an empty file as a 
directory marker.

*Now entries in Ozone KeyTable after create directory*
 d11/
 d11/d12

Because of this, OMFileRequest.VerifyInFilesPath fails with 
FILE_EXISTS_IN_GIVEN_PATH, since d11/d12 is considered a file, not a 
directory. (In Ozone, directories currently end with a trailing "/".)

So, when d11/d12/file is created, the parent-exists check treats d11/d12 as a 
file and fails with NOT_A_FILE.

When the flag is disabled it works fine, because key creation then neither 
checks filesystem semantics nor creates intermediate directories.
{code:java}
[root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
[root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
s3a://b12345/d11/d12/k1
[root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
Found 1 items
-rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
s3a://b12345/d11/d12/k1
{code}
 
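The failure above can be sketched as follows. This is a hypothetical 
illustration, not the actual OMFileRequest code: it models the key table as an 
in-memory set to show why the S3A marker key d11/d12 (no trailing "/") makes 
the parent check for d11/d12/file1 fail.

```java
// Hypothetical sketch (NOT the actual OMFileRequest code): models the Ozone
// key table as an in-memory set to show why "d11/d12" without a trailing "/"
// makes the parent check fail with NOT_A_FILE.
import java.util.Set;

public class ParentCheckSketch {

    /**
     * In Ozone, directory keys carry a trailing "/". A parent path is valid
     * only if no plain (file) key occupies it.
     */
    static boolean parentsAreDirectories(Set<String> keyTable, String keyName) {
        String[] parts = keyName.split("/");
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            prefix.append(parts[i]);
            // A key without a trailing "/" at this path means a file exists here.
            if (keyTable.contains(prefix.toString())) {
                return false; // corresponds to NOT_A_FILE / FILE_EXISTS_IN_GIVEN_PATH
            }
            prefix.append('/');
        }
        return true;
    }

    public static void main(String[] args) {
        // Keys left behind by "hdfs dfs -mkdir -p s3a://b12345/d11/d12" via S3A:
        Set<String> keyTable = Set.of(
            "d11/",     // directory marker
            "d11/d12"); // empty file, no trailing "/" => treated as a file
        System.out.println(parentsAreDirectories(keyTable, "d11/d12/file1"));
        // prints "false"
    }
}
```

If S3A had written d11/d12/ (with the trailing slash), the same check would 
pass.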






[jira] [Updated] (HDDS-4121) Implement OmMetadataMangerImpl#getExpiredOpenKeys

2020-08-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4121:
-
Component/s: (was: OM HA)
 Ozone Manager

> Implement OmMetadataMangerImpl#getExpiredOpenKeys
> -
>
> Key: HDDS-4121
> URL: https://issues.apache.org/jira/browse/HDDS-4121
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Implement the getExpiredOpenKeys method in OmMetadataMangerImpl to return 
> keys in the open key table that are older than a configurable time interval. 
> The method will be modified to take a parameter limiting how many keys are 
> returned. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day.
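The description above can be sketched as follows. This is a hypothetical 
illustration assuming an in-memory map from open key name to creation time; 
the real OmMetadataMangerImpl reads RocksDB-backed tables. The age threshold 
corresponds to ozone.open.key.expire.threshold, and the limit parameter is the 
new addition the Jira describes.

```java
// Hypothetical sketch of an expired-open-key scan; NOT the real
// OmMetadataMangerImpl, which reads RocksDB-backed tables.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ExpiredOpenKeysSketch {

    /**
     * Returns up to {@code limit} open keys whose age exceeds
     * {@code expireThresholdMs} (cf. ozone.open.key.expire.threshold).
     */
    static List<String> getExpiredOpenKeys(Map<String, Long> openKeyTable,
                                           long nowMs, long expireThresholdMs,
                                           int limit) {
        List<String> expired = new ArrayList<>();
        for (Map.Entry<String, Long> e : openKeyTable.entrySet()) {
            if (expired.size() >= limit) {
                break; // stop once the caller's limit is reached
            }
            if (nowMs - e.getValue() > expireThresholdMs) {
                expired.add(e.getKey());
            }
        }
        return expired;
    }

    public static void main(String[] args) {
        Map<String, Long> table = new LinkedHashMap<>();
        table.put("/vol/bucket/old-key", 0L);        // opened long ago
        table.put("/vol/bucket/fresh-key", 90_000L); // opened recently
        // Threshold of one day (in ms); "now" is just past one day.
        System.out.println(getExpiredOpenKeys(table, 86_400_001L, 86_400_000L, 10));
        // prints "[/vol/bucket/old-key]"
    }
}
```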






[jira] [Resolved] (HDDS-4121) Implement OmMetadataMangerImpl#getExpiredOpenKeys

2020-08-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4121.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Implement OmMetadataMangerImpl#getExpiredOpenKeys
> -
>
> Key: HDDS-4121
> URL: https://issues.apache.org/jira/browse/HDDS-4121
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Implement the getExpiredOpenKeys method in OmMetadataMangerImpl to return 
> keys in the open key table that are older than a configurable time interval. 
> The method will be modified to take a parameter limiting how many keys are 
> returned. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day.






[jira] [Updated] (HDDS-4121) Implement OmMetadataMangerImpl#getExpiredOpenKeys

2020-08-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4121:
-
Affects Version/s: (was: 0.6.0)
   1.0.0

> Implement OmMetadataMangerImpl#getExpiredOpenKeys
> -
>
> Key: HDDS-4121
> URL: https://issues.apache.org/jira/browse/HDDS-4121
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Implement the getExpiredOpenKeys method in OmMetadataMangerImpl to return 
> keys in the open key table that are older than a configurable time interval. 
> The method will be modified to take a parameter limiting how many keys are 
> returned. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day.






[jira] [Updated] (HDDS-4121) Implement OmMetadataMangerImpl#getExpiredOpenKeys

2020-08-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4121:
-
Priority: Major  (was: Minor)

> Implement OmMetadataMangerImpl#getExpiredOpenKeys
> -
>
> Key: HDDS-4121
> URL: https://issues.apache.org/jira/browse/HDDS-4121
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Implement the getExpiredOpenKeys method in OmMetadataMangerImpl to return 
> keys in the open key table that are older than a configurable time interval. 
> The method will be modified to take a parameter limiting how many keys are 
> returned. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day.






[jira] [Updated] (HDDS-4121) Implement OmMetadataMangerImpl#getExpiredOpenKeys

2020-08-31 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4121:
-
Affects Version/s: 0.6.0

> Implement OmMetadataMangerImpl#getExpiredOpenKeys
> -
>
> Key: HDDS-4121
> URL: https://issues.apache.org/jira/browse/HDDS-4121
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Affects Versions: 0.6.0
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Implement the getExpiredOpenKeys method in OmMetadataMangerImpl to return 
> keys in the open key table that are older than a configurable time interval. 
> The method will be modified to take a parameter limiting how many keys are 
> returned. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day.






[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-31 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17187869#comment-17187869
 ] 

Bharat Viswanadham commented on HDDS-4097:
--

This topic was discussed in today's community sync.

*Posting the discussion summary here:*
 # Even when the flag is false, intermediate directory creation should be 
supported, without any check of filesystem semantics. The reason is that 
ingestion via S3 followed by use through FS is currently completely broken. 
This might cause some issues in FS, since we create keys without checking any 
fs semantics, but it provides 100% AWS S3 semantics.
 # If someone finds an issue and wants stricter fs semantics, with a slight 
AWS S3 semantic variation, the ozone.om.enable.filesystem.paths config needs 
to be turned on. In this way, S3 semantics change slightly but 100% FS 
semantics are provided.
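A minimal ozone-site.xml fragment for the stricter-FS mode described in point 
2. The property name comes from the discussion above; placing it in 
ozone-site.xml and the default of "false" are assumptions.

```xml
<!-- Sketch: enable filesystem semantics cluster-wide (point 2 above).
     Property name is from the discussion; the default of "false" and this
     placement in ozone-site.xml are assumptions. -->
<property>
  <name>ozone.om.enable.filesystem.paths</name>
  <value>true</value>
  <description>Treat key names as Unix-style paths and create intermediate
    directories, trading strict AWS S3 semantics for 100% FS semantics.
  </description>
</property>
```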

 

 

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required so that Ozone buckets whose 
> data is ingested via S3 can also be used via OzoneFileSystem. The initial 
> implementation was done as part of HDDS-3955, but a few APIs missed those 
> changes. The attached design document discusses each API and what changes 
> are required.
> The Excel sheet lists, for each API, the interfaces from which the OM API 
> is used and the changes required for the API to support inter-operability.
> Note: The proposal for delete/rename is still under discussion and not yet 
> finalized.






[jira] [Resolved] (HDDS-4151) Skip the inputstream while offset larger than zero in s3g

2020-08-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4151.
--
Fix Version/s: (was: 0.6.0)
   1.1.0
   Resolution: Fixed

> Skip the inputstream while offset larger than zero in s3g
> -
>
> Key: HDDS-4151
> URL: https://issues.apache.org/jira/browse/HDDS-4151
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Updated] (HDDS-4146) Show the ScmId and ClusterId in the scm web ui.

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4146:
-
Fix Version/s: (was: 1.1)
   1.1.0

> Show the ScmId and ClusterId in the scm web ui.
> ---
>
> Key: HDDS-4146
> URL: https://issues.apache.org/jira/browse/HDDS-4146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[jira] [Resolved] (HDDS-4145) Bump version to 1.1.0-SNAPSHOT on master

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4145.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Bump version to 1.1.0-SNAPSHOT on master
> 
>
> Key: HDDS-4145
> URL: https://issues.apache.org/jira/browse/HDDS-4145
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> s/0.6.0-SNAPSHOT/1.1.0-SNAPSHOT/g






[jira] [Updated] (HDDS-4145) Bump version to 1.1.0-SNAPSHOT on master

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4145:
-
Target Version/s: 1.1.0  (was: 0.7.0)

> Bump version to 1.1.0-SNAPSHOT on master
> 
>
> Key: HDDS-4145
> URL: https://issues.apache.org/jira/browse/HDDS-4145
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> s/0.6.0-SNAPSHOT/1.1.0-SNAPSHOT/g






[jira] [Updated] (HDDS-4163) Enable filesystem semantics on bucket during creation of bucket

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4163:
-
Summary: Enable filesystem semantics on bucket during creation of bucket  
(was: Enable filesystem semantics on bucket during creation)

> Enable filesystem semantics on bucket during creation of bucket
> ---
>
> Key: HDDS-4163
> URL: https://issues.apache.org/jira/browse/HDDS-4163
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-3955 added an OzoneManager config, ozone.om.enable.filesystem.paths, so 
> that data ingested via S3 can be used from OFS/O3FS.
>  
> But this changed the semantics of AWS S3 behavior: paths are now treated as 
> Unix paths, and intermediate directories are created. This changed the 
> behavior of the entire cluster.
> This Jira provides an option for the user to enable filesystem semantics per 
> bucket. In this way, a single cluster can contain buckets with 100% AWS S3 
> semantics, which will not work when used from FS (the current behavior, no 
> change there). When the user intends to use a bucket from FS, they enable 
> filesystem semantics on the bucket at creation time; the bucket can then be 
> used from S3 as well, with paths treated as Unix paths following fs 
> semantics. (Such a bucket is not 100% AWS compatible.)
>  
> Note: This option must be set at bucket creation time and cannot be changed 
> later, as that could cause weird issues.
>  






[jira] [Updated] (HDDS-4163) Enable filesystem semantics on bucket during creation

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4163:
-
Description: 
HDDS-3955 added an OzoneManager config, ozone.om.enable.filesystem.paths, so 
that data ingested via S3 can be used from OFS/O3FS.

But this changed the semantics of AWS S3 behavior: paths are now treated as 
Unix paths, and intermediate directories are created. This changed the 
behavior of the entire cluster.

This Jira provides an option for the user to enable filesystem semantics per 
bucket. In this way, a single cluster can contain buckets with 100% AWS S3 
semantics, which will not work when used from FS (the current behavior, no 
change there). When the user intends to use a bucket from FS, they enable 
filesystem semantics on the bucket at creation time; the bucket can then be 
used from S3 as well, with paths treated as Unix paths following fs semantics. 
(Such a bucket is not 100% AWS compatible.)

Note: This option must be set at bucket creation time and cannot be changed 
later, as that could cause weird issues.

 

  was:
HDDS-3955 added a config to OzoneManager, ozone.om.enable.filesystem.paths, so 
that data ingested via S3 can be used from OFS/O3FS.

 

But this changed the semantics of AWS S3 behavior: paths are now treated as 
Unix paths, and intermediate directories are created.

 

This changed the behavior of the entire cluster.

This Jira is to provide an option for the user to enable filesystem semantics 
at the bucket level, during bucket creation. This way, a single cluster can 
have buckets that adhere 100% to AWS S3 semantics; such buckets will not work 
from the FS interface, which is the current behavior, so there is no change in 
that part. When the user decides to use a bucket from FS, they enable 
filesystem semantics on the bucket at creation time; that bucket can then also 
be used from S3, with paths treated as Unix paths following FS semantics. 
(Such a bucket does not have 100% AWS compatibility.)

 

Note: This option must be set at bucket creation time and cannot be changed 
later, as changing it can cause inconsistent behavior.

 

 


> Enable filesystem semantics on bucket during creation
> -
>
> Key: HDDS-4163
> URL: https://issues.apache.org/jira/browse/HDDS-4163
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> HDDS-3955 added a config to OzoneManager, ozone.om.enable.filesystem.paths, 
> so that data ingested via S3 can be used from OFS/O3FS.
>  
> But this changed the semantics of AWS S3 behavior: paths are now treated as 
> Unix paths, and intermediate directories are created. This changed the 
> behavior of the entire cluster.
> This Jira is to provide an option for the user to enable filesystem 
> semantics at the bucket level, during bucket creation. This way, a single 
> cluster can have buckets that adhere 100% to AWS S3 semantics; such buckets 
> will not work from the FS interface, which is the current behavior, so there 
> is no change in that part. When the user decides to use a bucket from FS, 
> they enable filesystem semantics on the bucket at creation time; that bucket 
> can then also be used from S3, with paths treated as Unix paths following FS 
> semantics. (Such a bucket does not have 100% AWS compatibility.)
>  
> Note: This option must be set at bucket creation time and cannot be changed 
> later, as changing it can cause inconsistent behavior.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4163) Enable filesystem semantics on bucket during creation

2020-08-27 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4163:


 Summary: Enable filesystem semantics on bucket during creation
 Key: HDDS-4163
 URL: https://issues.apache.org/jira/browse/HDDS-4163
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


HDDS-3955 added a config to OzoneManager, ozone.om.enable.filesystem.paths, so 
that data ingested via S3 can be used from OFS/O3FS.

 

But this changed the semantics of AWS S3 behavior: paths are now treated as 
Unix paths, and intermediate directories are created.

 

This changed the behavior of the entire cluster.

This Jira is to provide an option for the user to enable filesystem semantics 
at the bucket level, during bucket creation. This way, a single cluster can 
have buckets that adhere 100% to AWS S3 semantics; such buckets will not work 
from the FS interface, which is the current behavior, so there is no change in 
that part. When the user decides to use a bucket from FS, they enable 
filesystem semantics on the bucket at creation time; that bucket can then also 
be used from S3, with paths treated as Unix paths following FS semantics. 
(Such a bucket does not have 100% AWS compatibility.)

 

Note: This option must be set at bucket creation time and cannot be changed 
later, as changing it can cause inconsistent behavior.

 






[jira] [Updated] (HDDS-4155) Directory and filename can end up with same name in a path

2020-08-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4155:
-
Status: Patch Available  (was: Open)

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3 and create a directory through FS:
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file both named "c".
> The same can happen when created through the FS interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file both named "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So now in Ozone we will have a directory and a file both named "c". This is 
> one example scenario with multipart upload (MPU).
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same 
> name. The disadvantage is that we fail at the last stage, after the user has 
> already uploaded all the data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1, 2. mkdir t1, 3. save t1: fails with ""t1" is 
> a directory".)
>  # During directory creation, check whether any open key creation with the 
> same name exists, and fail the directory creation.
>  
> Neither of the above approaches is final; this Jira was opened to discuss 
> the issue and come up with a solution.
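The two proposals can be sketched with a toy namespace model (this is purely illustrative, not Ozone Manager's actual code or data structures): proposal 1 checks for a conflicting directory at key commit time, and proposal 2 checks for a conflicting open key at directory creation time.

```python
class NameConflict(Exception):
    """Raised when a file and a directory would share the same path."""

class ToyNamespace:
    """Toy model of the two conflict-prevention proposals above."""

    def __init__(self):
        self.directories = set()
        self.open_keys = set()
        self.files = set()

    def open_key(self, path):
        # An upload in progress: the key is open but not yet committed.
        self.open_keys.add(path)

    def create_directory(self, path):
        # Proposal 2: fail directory creation if an open key (an upload
        # in progress) already uses the same name.
        if path in self.open_keys:
            raise NameConflict(f"open key exists: {path}")
        self.directories.add(path)

    def commit_key(self, path):
        # Proposal 1: fail the commit if a directory with the same name
        # was created while the key was open. The drawback noted above:
        # the failure surfaces only after all data has been uploaded.
        if path in self.directories:
            raise NameConflict(f"directory exists: {path}")
        self.open_keys.discard(path)
        self.files.add(path)

ns = ToyNamespace()
ns.open_key("a/b/c")              # open key -> /a/b/c
try:
    ns.create_directory("a/b/c")  # proposal 2 rejects this step
except NameConflict as e:
    print("mkdir rejected:", e)
```

All class and method names here are hypothetical; the sketch only shows where each proposal places the check and why proposal 1 fails late while proposal 2 fails early.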






  1   2   3   4   5   6   7   8   >