[jira] [Updated] (HDDS-4405) Proxy failover is logging without trying all OMs

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4405:
-
Labels: pull-request-available  (was: )

> Proxy failover is logging without trying all OMs
> -
>
> Key: HDDS-4405
> URL: https://issues.apache.org/jira/browse/HDDS-4405
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: umamaheswararao
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> [root@uma-1 ~]# sudo -u hdfs hdfs dfs -ls o3fs://bucket.volume.ozone1/
> 20/10/28 23:37:50 INFO retry.RetryInvocationHandler: 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3.
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:198)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:123)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:73)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:113)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882)
> , while invoking $Proxy10.submitRequest over 
> {om1=nodeId=om1,nodeAddress=uma-1.uma.root.hwx.site:9862, 
> om3=nodeId=om3,nodeAddress=uma-3.uma.root.hwx.site:9862, 
> om2=nodeId=om2,nodeAddress=uma-2.uma.root.hwx.site:9862} after 1 failover 
> attempts. Trying to failover immediately.{code}
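For illustration, a minimal sketch of the expected behavior, where a failover attempt is only counted (and logged) once every OM in the list has been tried. All names here are hypothetical; this is not the actual Hadoop RetryInvocationHandler or OM proxy-provider code:

```java
import java.util.List;

// Sketch: rotate through all OMs on OMNotLeaderException, and only count
// (and log) a failover after a full round over the OM list has been tried.
class RoundRobinFailover {
    private final List<String> oms;
    private int current = 0;
    private int attemptsInRound = 0;
    private int failoverCount = 0;

    RoundRobinFailover(List<String> oms) { this.oms = oms; }

    // Advance to the next OM; count a failover only after a full round.
    String nextOm() {
        current = (current + 1) % oms.size();
        attemptsInRound++;
        if (attemptsInRound >= oms.size()) {  // every OM tried once
            attemptsInRound = 0;
            failoverCount++;                  // only now count/log a failover
        }
        return oms.get(current);
    }

    int getFailoverCount() { return failoverCount; }
}
```

With three OMs, switching from om2 to om3 alone would not yet count as a failover attempt.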



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4363) Add metric to track the number of RocksDB open/close operations

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4363:
-
Labels: pull-request-available  (was: )

> Add metric to track the number of RocksDB open/close operations
> ---
>
> Key: HDDS-4363
> URL: https://issues.apache.org/jira/browse/HDDS-4363
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 1.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>
> We are benchmarking Ozone performance, and realized RocksDB open/close 
> operations have a huge impact on performance. Each db open takes about 70ms 
> on average and each close takes about 1ms on average.
>  
> Having metrics on these operations will help understand DataNode performance 
> problems.
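As a rough illustration of what such a metric could capture, here is a minimal, self-contained sketch. The class and method names are made up for this example; Ozone's actual datanode metrics would normally go through Hadoop's metrics2 framework:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

// Sketch of the requested metric: count of DB open/close operations plus
// cumulative open latency, from which an average can be derived.
class DbOpenCloseMetrics {
    private final LongAdder numDbOpens = new LongAdder();
    private final LongAdder numDbCloses = new LongAdder();
    private final AtomicLong openTimeNanos = new AtomicLong();

    // Wrap a DB open, recording both the count and the elapsed time.
    <T> T timedOpen(Supplier<T> open) {
        long start = System.nanoTime();
        try {
            return open.get();
        } finally {
            numDbOpens.increment();
            openTimeNanos.addAndGet(System.nanoTime() - start);
        }
    }

    void recordClose() { numDbCloses.increment(); }

    long getNumDbOpens()  { return numDbOpens.sum(); }
    long getNumDbCloses() { return numDbCloses.sum(); }
    long getAvgOpenNanos() {
        long n = getNumDbOpens();
        return n == 0 ? 0 : openTimeNanos.get() / n;
    }
}
```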






[jira] [Updated] (HDDS-4401) Fix compilation issue in HDDS-3698-upgrade branch.

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4401:
-
Labels: pull-request-available  (was: )

> Fix compilation issue in HDDS-3698-upgrade branch.
> --
>
> Key: HDDS-4401
> URL: https://issues.apache.org/jira/browse/HDDS-4401
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> {code}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-ozone-ozone-manager: Compilation failure: Compilation 
> failure: 
> Error:  
> /mnt/ozone/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java:[691,6]
>  not a statement
> Error:  
> /mnt/ozone/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java:[691,23]
>  ';' expected
> Error:  -> [Help 1]
> {code}






[jira] [Updated] (HDDS-4357) Rename : make rename an atomic ops by updating key path entry in dir/file table

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4357:
-
Labels: pull-request-available  (was: )

> Rename : make rename an atomic ops by updating key path entry in dir/file 
> table
> ---
>
> Key: HDDS-4357
> URL: https://issues.apache.org/jira/browse/HDDS-4357
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
>
> This task is to handle the rename key path request and make it an atomic 
> operation by updating the DirTable or FileTable.
> In this Jira, we consider only the new Ozone FS client talking to the new OM 
> server. Later, I will raise a separate Jira task to handle compatibility 
> across different client/server versions.






[jira] [Updated] (HDDS-4400) Make raft log directory deletion configurable during pipeline remove

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4400:
-
Labels: pull-request-available  (was: )

> Make raft log directory deletion configurable during pipeline remove
> 
>
> Key: HDDS-4400
> URL: https://issues.apache.org/jira/browse/HDDS-4400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> The idea here is to add a config key that controls whether the raft log 
> directory is removed during pipeline remove.






[jira] [Updated] (HDDS-4399) Safe mode rule for pipelines should only consider open pipelines

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4399:
-
Labels: pull-request-available  (was: )

> Safe mode rule for pipelines should only consider open pipelines
> -
>
> Key: HDDS-4399
> URL: https://issues.apache.org/jira/browse/HDDS-4399
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently, for safe mode we consider all pipelines existing in the DB for the 
> safe mode exit criteria. It may happen that SCM has the pipelines created, 
> but none of the participant datanodes ever created these pipelines. In such 
> cases, SCM fails to come out of safe mode, as these pipelines are never 
> reported back to SCM.
>  
> The idea here is to consider only pipelines which are marked open during SCM 
> startup.






[jira] [Updated] (HDDS-3959) Avoid HddsProtos.PipelineID#toString

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3959:
-
Labels: pull-request-available  (was: )

> Avoid HddsProtos.PipelineID#toString
> 
>
> Key: HDDS-3959
> URL: https://issues.apache.org/jira/browse/HDDS-3959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> {{PipelineID}} was recently changed to have an integer-based ID in addition 
> to the string ID.  Now log messages including {{PipelineID}} span multiple 
> lines:
> {code:title=https://github.com/elek/ozone-build-results/blob/92d31c9b58065b37a371c71c97b346f99163318d/2020/07/11/1626/acceptance/docker-ozone-ozone-freon-scm.log#L218-L223}
> datanode_1  | 2020-07-11 13:07:00,540 [Command processor thread] INFO 
> commandhandler.CreatePipelineCommandHandler: Created Pipeline RATIS ONE #id: 
> "8101dcbf-1a28-4f20-863a-0616b4e4bc4b"
> datanode_1  | uuid128 {
> datanode_1  |   mostSigBits: -9150790254504423648
> datanode_1  |   leastSigBits: -8774694229384053685
> datanode_1  | }
> datanode_1  | .
> {code}






[jira] [Updated] (HDDS-4258) Set GDPR to a Security submenu in EN and CN document.

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4258:
-
Labels: newbie pull-request-available  (was: newbie)

> Set GDPR to a Security submenu in EN and CN document.
> -
>
> Key: HDDS-4258
> URL: https://issues.apache.org/jira/browse/HDDS-4258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Zheng Huang-Mu
>Assignee: François Risch
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> Based on [~xyao]'s comment on HDDS-4156.
> https://github.com/apache/hadoop-ozone/pull/1368#issuecomment-694532324
> Set GDPR to a Security submenu in EN and CN document.






[jira] [Updated] (HDDS-4320) Let Ozone input streams implement CanUnbuffer

2020-10-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4320:
-
Labels: pull-request-available  (was: )

> Let Ozone input streams implement CanUnbuffer
> -
>
> Key: HDDS-4320
> URL: https://issues.apache.org/jira/browse/HDDS-4320
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Implement Hadoop's {{CanUnbuffer}} interface in {{OzoneFSInputStream}} and 
> the underlying other input streams.  Note: {{CanUnbuffer}} is available in 
> 2.7 (HDFS-7694), but {{StreamCapabilities#UNBUFFER}} is new to 2.9.1 
> (HADOOP-15012).






[jira] [Updated] (HDDS-4362) Change hadoop32 test to use 3.2 image

2020-10-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4362:
-
Labels: pull-request-available  (was: )

> Change hadoop32 test to use 3.2 image
> -
>
> Key: HDDS-4362
> URL: https://issues.apache.org/jira/browse/HDDS-4362
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> The {{ozone-mr/hadoop32}} acceptance test currently uses the "latest" 
> {{hadoop:3}} docker image, which is currently Hadoop 3.2.  If it gets updated 
> to Hadoop 3.3, the Ozone acceptance test will break.  We should explicitly 
> use a 3.2 release-based image.






[jira] [Updated] (HDDS-4388) Make writeStateMachineTimeout retry count proportional to node failure timeout

2020-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4388:
-
Labels: pull-request-available  (was: )

> Make writeStateMachineTimeout retry count proportional to node failure timeout
> --
>
> Key: HDDS-4388
> URL: https://issues.apache.org/jira/browse/HDDS-4388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently, in Ratis the writeStateMachine call gets retried indefinitely in 
> the event of a timeout. When disks are slow/overloaded, or no chunk writer 
> threads are available for a period of 10s, the writeStateMachine call times 
> out in 10s. In such cases the same write chunk keeps getting retried, causing 
> the same chunk of data to be overwritten. The idea here is to abort the 
> request once the node failure timeout is reached.
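The proportionality suggested by the title can be sketched as a one-line calculation. The names and units below are illustrative, not the actual Ratis/Ozone configuration keys:

```java
// Sketch: derive the retry cap from the node failure timeout, so retries
// stop roughly when the node would be declared failed anyway.
class WriteTimeoutRetryPolicy {
    // e.g. nodeFailureTimeoutMs = 600_000 (10 min), writeTimeoutMs = 10_000 (10 s)
    static long maxRetries(long nodeFailureTimeoutMs, long writeTimeoutMs) {
        return Math.max(1, nodeFailureTimeoutMs / writeTimeoutMs);
    }
}
```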






[jira] [Updated] (HDDS-4386) Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon

2020-10-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4386:
-
Labels: pull-request-available  (was: )

> Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon
> -
>
> Key: HDDS-4386
> URL: https://issues.apache.org/jira/browse/HDDS-4386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Blocker
>  Labels: pull-request-available
>
> In the Tencent production environment, after starting Recon for a while, we 
> got warnings that all DNs became stale/dead on the SCM side. After killing 
> Recon, all DNs became healthy in a very short time.
>  
> *The root cause is:*
> 1) EndpointStateMachine for SCM and that for Recon share the thread pool 
> created by DatanodeStateMachine, which is a fixed size thread pool:
> {code:java}
> executorService = Executors.newFixedThreadPool(
> getEndPointTaskThreadPoolSize(),
> new ThreadFactoryBuilder()
> .setNameFormat("Datanode State Machine Task Thread - %d").build());
> private int getEndPointTaskThreadPoolSize() {
>   // TODO(runzhiwang): current only support one recon, if support multiple
>   //  recon in future reconServerCount should be the real number of recon
>   int reconServerCount = 1;
>   int totalServerCount = reconServerCount;
>   try {
> totalServerCount += HddsUtils.getSCMAddresses(conf).size();
>   } catch (Exception e) {
> LOG.error("Fail to get scm addresses", e);
>   }
>   return totalServerCount;
> }
> {code}
> Meanwhile, the current Recon has a performance issue: after running for 
> hours, it became slower and slower, and crashed due to OOM. 
> 2) The communication between DN and Recon will soon exhaust all the threads 
> in DatanodeStateMachine.executorService, leaving no threads available for the 
> DN to talk to SCM. 
> 3) All DNs become stale/dead on the SCM side.
>  
> *The fix is quite straightforward:*
> Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon, so 
> a slow Recon won't interfere with the communication between DN and SCM, and 
> vice versa.
>  
> *P.S.*
> The first version of DatanodeStateMachine.executorService was a cached thread 
> pool: with a slow SCM/Recon, more and more threads get created, and the DN 
> will eventually OOM because tens of thousands of threads are created.
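The proposed fix can be sketched with plain java.util.concurrent primitives; the names below are illustrative, not the actual Ozone classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the fix: one dedicated executor per endpoint (SCM or Recon),
// so a slow or stuck endpoint cannot starve the others.
class PerEndpointExecutors {
    private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    // Lazily create a dedicated single-threaded pool per endpoint address.
    ExecutorService poolFor(String endpoint) {
        return pools.computeIfAbsent(endpoint, e ->
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "EndpointStateMachine Task Thread - " + e);
                t.setDaemon(true);
                return t;
            }));
    }

    Future<?> submit(String endpoint, Runnable task) {
        return poolFor(endpoint).submit(task);
    }

    void shutdown() { pools.values().forEach(ExecutorService::shutdownNow); }
}
```

Because each endpoint owns its pool, a Recon task blocking forever leaves the SCM pool untouched.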






[jira] [Updated] (HDDS-4375) OM changes the block length when receives truncate request

2020-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4375:
-
Labels: pull-request-available  (was: )

> OM changes the block length when receives truncate request
> --
>
> Key: HDDS-4375
> URL: https://issues.apache.org/jira/browse/HDDS-4375
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-3731) add storage space quota doc

2020-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3731:
-
Labels: pull-request-available  (was: )

> add storage space quota doc
> ---
>
> Key: HDDS-3731
> URL: https://issues.apache.org/jira/browse/HDDS-3731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4373) Add append design documentation

2020-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4373:
-
Labels: pull-request-available  (was: )

> Add append design documentation
> ---
>
> Key: HDDS-4373
> URL: https://issues.apache.org/jira/browse/HDDS-4373
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4366) SCM deletion service should delete configured number of blocks every interval

2020-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4366:
-
Labels: pull-request-available  (was: )

> SCM deletion service should delete configured number of blocks every interval
> -
>
> Key: HDDS-4366
> URL: https://issues.apache.org/jira/browse/HDDS-4366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>
> The SCM service currently uses the datanode's configuration to determine the 
> number of blocks to delete every interval. It should have its own 
> configuration for the maximum number of blocks to delete in every interval.
> Further, it currently scans the entire DB to fetch block deletion 
> transactions. This can be avoided: with this patch, the service would always 
> fetch the configured number of blocks from the DB.
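The capped fetch described above can be sketched as follows. This is illustrative only; the real implementation works against RocksDB deleted-block transactions, not an in-memory map:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: stop iterating deletion transactions once the configured number
// of blocks has been collected, instead of scanning the entire table.
class CappedBlockFetcher {
    // txs maps transaction id -> number of blocks in that transaction
    static List<Long> pickTransactions(LinkedHashMap<Long, Integer> txs, int blockLimit) {
        List<Long> picked = new ArrayList<>();
        int blocks = 0;
        for (Map.Entry<Long, Integer> e : txs.entrySet()) {
            if (blocks >= blockLimit) break;  // cap reached, no full scan
            picked.add(e.getKey());
            blocks += e.getValue();
        }
        return picked;
    }
}
```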






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4365:
-
Labels: pull-request-available  (was: )

> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Minor
>  Labels: pull-request-available
>
> In SCMBlockLocationFailoverProxyProvider,
> currently it is:
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  
> FYI, compare with the non-HA version:
> {code:java}
> private static ScmBlockLocationProtocol getScmBlockClient(
> OzoneConfiguration conf) throws IOException {
>   RPC.setProtocolEngine(conf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   long scmVersion =
>   RPC.getProtocolVersion(ScmBlockLocationProtocolPB.class);
>   InetSocketAddress scmBlockAddress =
>   getScmAddressForBlockClients(conf);
>   ScmBlockLocationProtocolClientSideTranslatorPB scmBlockLocationClient =
>   new ScmBlockLocationProtocolClientSideTranslatorPB(
>   RPC.getProxy(ScmBlockLocationProtocolPB.class, scmVersion,
>   scmBlockAddress, UserGroupInformation.getCurrentUser(), conf,
>   NetUtils.getDefaultSocketFactory(conf),
>   Client.getRpcTimeout(conf)));
>   return TracingUtil
>   .createProxy(scmBlockLocationClient, ScmBlockLocationProtocol.class,
>   conf);
> }
> {code}






[jira] [Updated] (HDDS-4123) Integrate OM Open Key Cleanup Service Into Existing Code

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4123:
-
Labels: pull-request-available  (was: )

> Integrate OM Open Key Cleanup Service Into Existing Code
> 
>
> Key: HDDS-4123
> URL: https://issues.apache.org/jira/browse/HDDS-4123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
>
> Implement the `OpenKeyCleanupService` class, and start and stop the service 
> in `KeyManagerImpl`. The following configurations will be added to specify 
> the service's behavior:
>  # ozone.open.key.cleanup.service.interval: How frequently the service runs.
>  # ozone.open.key.expire.threshold: Time from creation after which an open 
> key is deemed expired.
>  # ozone.open.key.cleanup.limit.per.task: Maximum number of keys the service 
> can mark for deletion on each run.
> Default values for these configurations will be chosen from HDFS data.
>  
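A hedged sketch of how these three settings might look in ozone-site.xml; the property names come from the list above, but the values shown are placeholders, not the defaults the patch will choose:

```xml
<!-- Illustrative only: the values below are placeholders, not the
     defaults that will actually be chosen from HDFS data. -->
<property>
  <name>ozone.open.key.cleanup.service.interval</name>
  <value>24h</value>
</property>
<property>
  <name>ozone.open.key.expire.threshold</name>
  <value>7d</value>
</property>
<property>
  <name>ozone.open.key.cleanup.limit.per.task</name>
  <value>1000</value>
</property>
```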






[jira] [Updated] (HDDS-4191) Add failover proxy for SCM container client

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4191:
-
Labels: pull-request-available  (was: )

> Add failover proxy for SCM container client
> ---
>
> Key: HDDS-4191
> URL: https://issues.apache.org/jira/browse/HDDS-4191
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>
> Take advantage of the failover proxy in HDDS-3188 and have a failover proxy 
> for the SCM container client as well.






[jira] [Updated] (HDDS-4307) Start Background Service for Trash Deletion in Ozone Manager

2020-10-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4307:
-
Labels: pull-request-available  (was: )

> Start Background Service for Trash Deletion in Ozone Manager
> 
>
> Key: HDDS-4307
> URL: https://issues.apache.org/jira/browse/HDDS-4307
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4359) Expose VolumeIOStats in DN JMX

2020-10-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4359:
-
Labels: pull-request-available  (was: )

> Expose VolumeIOStats in DN JMX
> --
>
> Key: HDDS-4359
> URL: https://issues.apache.org/jira/browse/HDDS-4359
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Expose VolumeIOStats in DN JMX web endpoint.






[jira] [Updated] (HDDS-4328) Provide fallback cache restore key

2020-10-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4328:
-
Labels: pull-request-available  (was: )

> Provide fallback cache restore key
> --
>
> Key: HDDS-4328
> URL: https://issues.apache.org/jira/browse/HDDS-4328
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Maven dependency cache hit or miss in GitHub Actions workflow is based on the 
> hash of all POM files.  If any POM is changed, all dependencies need to be 
> downloaded from scratch.  Providing {{restore-keys}} would allow it to fall 
> back to one of the previous caches, potentially avoiding most of the 
> downloads.
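A hedged sketch of what the change might look like in the workflow file; the actual paths and key names used by Ozone's CI may differ:

```yaml
# Illustrative only: cache the Maven repository with a fallback restore key,
# so a POM change falls back to the newest prefix-matching cache instead of
# re-downloading everything.
- uses: actions/cache@v2
  with:
    path: ~/.m2/repository
    key: maven-repo-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      maven-repo-
```

When the exact `key` misses, `restore-keys` restores the most recent cache whose key starts with the given prefix.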






[jira] [Updated] (HDDS-3714) Ozone support append truncate operation

2020-10-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3714:
-
Labels: pull-request-available  (was: )

> Ozone support append truncate operation
> ---
>
> Key: HDDS-3714
> URL: https://issues.apache.org/jira/browse/HDDS-3714
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Manager
>Reporter: maobaolong
>Assignee: runzhiwang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4332) ListFileStatus - do lookup in directory and file tables

2020-10-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4332:
-
Labels: pull-request-available  (was: )

> ListFileStatus - do lookup in directory and file tables
> ---
>
> Key: HDDS-4332
> URL: https://issues.apache.org/jira/browse/HDDS-4332
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
>
> This task is to perform a lookup of the user-given {{key}} path in the 
> directory, file and openFile tables.






[jira] [Updated] (HDDS-4354) Avoid spamming of logs because of deleted transactions

2020-10-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4354:
-
Labels: pull-request-available  (was: )

> Avoid spamming of logs because of deleted transactions
> --
>
> Key: HDDS-4354
> URL: https://issues.apache.org/jira/browse/HDDS-4354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>
> The following two log entries are seen regularly during MiniOzoneChaosTests:
> {code}
> 2020-10-15 17:21:25,467 [CommandWatcher-LeaseManager#LeaseMonitor] ERROR 
> lease.LeaseManager (LeaseManager.java:run(238)) - Execution was interrupted
> java.lang.InterruptedException: sleep interrupted
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.ozone.lease.LeaseManager$LeaseMonitor.run(LeaseManager.java:234)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> {code}
> 2020-10-15 17:21:58,487 [IPC Server handler 11 on default port 59106] WARN  
> block.DeletedBlockLogImpl (DeletedBlockLogImpl.java:incrementCount(135)) - 
> Deleted TXID 27 not found.
> 2020-10-15 17:21:58,488 [IPC Server handler 11 on default port 59106] WARN  
> block.DeletedBlockLogImpl (DeletedBlockLogImpl.java:incrementCount(135)) - 
> Deleted TXID 30 not found.
> {code}
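The ERROR entry above is the familiar pattern of logging an expected shutdown interrupt at the wrong level. The sketch below is an illustrative pattern only, not the actual LeaseManager code: treat the interrupt as a normal shutdown signal, restore the interrupt flag, and exit quietly.

```java
public class LeaseMonitorDemo implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                Thread.sleep(1000);  // poll interval, as in a lease monitor
            } catch (InterruptedException e) {
                // Expected during shutdown: restore the interrupt flag and
                // return quietly instead of logging at ERROR level.
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void shutdown(Thread monitorThread) {
        running = false;             // stop the loop
        monitorThread.interrupt();   // wake it from sleep()
    }

    public static void main(String[] args) throws InterruptedException {
        LeaseMonitorDemo monitor = new LeaseMonitorDemo();
        Thread t = new Thread(monitor, "LeaseMonitor");
        t.start();
        monitor.shutdown(t);
        t.join(2000);  // the monitor exits promptly, with no ERROR spam
    }
}
```

The second set of WARN lines (Deleted TXID not found) is a separate rate/level question; the interrupt handling alone removes the first source of noise.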






[jira] [Updated] (HDDS-4323) Add integration tests for putting nodes into maintenance and fix any issues uncovered in the tests

2020-10-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4323:
-
Labels: pull-request-available  (was: )

> Add integration tests for putting nodes into maintenance and fix any issues 
> uncovered in the tests
> --
>
> Key: HDDS-4323
> URL: https://issues.apache.org/jira/browse/HDDS-4323
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> Add a series of integration tests to prove nodes can enter and leave 
> maintenance correctly, and address any issues found in the code while adding the tests.






[jira] [Updated] (HDDS-4324) DatanodeAdminMonitor no longer needs maintenance end time to be passed

2020-10-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4324:
-
Labels: pull-request-available  (was: )

> DatanodeAdminMonitor no longer needs maintenance end time to be passed
> ---
>
> Key: HDDS-4324
> URL: https://issues.apache.org/jira/browse/HDDS-4324
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> An earlier change moved the maintenance end time into the NodeStatus object. 
> However, when adding a node to the decommission monitor, an end time must 
> still be passed, even though the value is never used.
> This Jira will remove the endInHours field from the interface:
> {code}
> public interface DatanodeAdminMonitor extends Runnable {
>   void startMonitoring(DatanodeDetails dn, int endInHours);
>   void stopMonitoring(DatanodeDetails dn);
> }
> {code}






[jira] [Updated] (HDDS-4339) Ozone S3 gateway throws NPE with goofys

2020-10-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4339:
-
Labels: pull-request-available  (was: )

> Ozone S3 gateway throws NPE with goofys
> ---
>
> Key: HDDS-4339
> URL: https://issues.apache.org/jira/browse/HDDS-4339
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Li Cheng
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: image-2020-10-13-15-23-49-864.png
>
>
> Configured goofys and s3g on different hosts, with Fiotest writing files on the 
> goofys mount point and AWS secrets exported on the s3g host. A bunch of NPEs 
> appear in the s3g logs.
>  # A missing AWS auth header appears to cause the NPE: it looks like 
> AWSSignatureProcessor.init() does not handle a missing header.
>  # Why the AWS auth header is missing in the first place is also unknown.
> Note that some files have been successfully written into Ozone via goofys, but 
> not all of them succeed.
>  
> 2020-10-13 11:18:43,425 [qtp1686100174-1238] ERROR 
> org.apache.hadoop.ozone.s3.OzoneClientProducer: Error: 
> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke public 
> void org.apache.hadoop.ozone.s3.AWSSignatureProcessor.init() throws 
> java.lang.Exception on 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor@5535155b
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:99)
>  at 
> org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:80)
>  at 
> org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
>  at 
> org.glassfish.jersey.ext.cdi1x.internal.CdiComponentProvider$InjectionManagerInjectedCdiTarget.postConstruct(CdiComponentProvider.java:887)
>  at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:162)
>  at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$CachingContextualInstanceStrategy.get(ContextualInstanceStrategy.java:177)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
>  at 
> org.jboss.weld.bean.proxy.ProxyMethodHandler.getInstance(ProxyMethodHandler.java:125)
>  at 
> org.apache.hadoop.ozone.s3.AWSSignatureProcessor$Proxy$_$$_WeldClientProxy.getAwsAccessId(Unknown
>  Source)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:79)
>  at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
>  at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
>  at 
> org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
>  at 
> org.jboss.weld.injection.producer.ProducerMethodProducer.produce(ProducerMethodProducer.java:100)
>  at 
> org.jboss.weld.injection.producer.AbstractMemberProducer.produce(AbstractMemberProducer.java:161)
>  at 
> org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:180)
>  at 
> org.jboss.weld.context.unbound.DependentContextImpl.get(DependentContextImpl.java:70)
>  at 
> org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
>  at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:785)
>  at 
> org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:885)
>  at 
> org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:92)
>  at org.jboss.weld.util.Beans.injectBoundFields(Beans.java:358)
>  at org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:369)
>  at 
> org.jboss.weld.injection.producer.ResourceInjector$1.proceed(ResourceInjector.java:70)
>  at 
> org.jboss.weld.injection.InjectionContextImpl.run(InjectionContextImpl.java:48)
>  at 
> org.jboss.weld.injection.producer.ResourceInjector.inject(ResourceInjector.java:72)
>  at 
> org.jboss.weld.injection.producer.BasicInjectionTarget.inject(BasicInjectionTarget.java:117)
>  at 
> 

[jira] [Updated] (HDDS-4345) Delete the deprecated Lock method

2020-10-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4345:
-
Labels: pull-request-available  (was: )

> Delete the deprecated Lock method
> -
>
> Key: HDDS-4345
> URL: https://issues.apache.org/jira/browse/HDDS-4345
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-10-14-19-21-19-096.png, 
> image-2020-10-14-19-21-28-928.png
>
>
>  !image-2020-10-14-19-21-19-096.png! 
>  !image-2020-10-14-19-21-28-928.png! 
> acquireLock has been deprecated and replaced by acquireWriteLock; 
> acquireWriteLock and acquireReadLock are now the recommended methods.






[jira] [Updated] (HDDS-4343) ReplicationManager.handleOverReplicatedContainer() does not handle unhealthyReplicas properly.

2020-10-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4343:
-
Labels: pull-request-available  (was: )

> ReplicationManager.handleOverReplicatedContainer() does not handle 
> unhealthyReplicas properly.
> --
>
> Key: HDDS-4343
> URL: https://issues.apache.org/jira/browse/HDDS-4343
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Blocker
>  Labels: pull-request-available
>
> {code:java}
>   // If there are unhealthy replicas, then we should remove them even if 
> it
>   // makes the container violate the placement policy, as excess unhealthy
>   // containers are not really useful. It will be corrected later as a
>   // mis-replicated container will be seen as under-replicated.
>   for (ContainerReplica r : unhealthyReplicas) {
> if (excess > 0) {
>   sendDeleteCommand(container, r.getDatanodeDetails(), true);
>   excess -= 1;
> }
> break;
>   }
>   // After removing all unhealthy replicas, if the container is still over
>   // replicated then we need to check if it is already mis-replicated.
>   // If it is, we do no harm by removing excess replicas. However, if it 
> is
>   // not mis-replicated, then we can only remove replicas if they don't
>   // make the container become mis-replicated.
> {code}
> From the comment, the intent is to remove unhealthy replicas until excess 
> reaches 0, but the unconditional break exits after the first iteration. It should be:
> {code:java}
>   for (ContainerReplica r : unhealthyReplicas) {
> if (excess > 0) {
>   sendDeleteCommand(container, r.getDatanodeDetails(), true);
>   excess -= 1;
> } else {
>   break;
> }
>   }
> {code}






[jira] [Updated] (HDDS-4330) Bootstrap new OM node

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4330:
-
Labels: pull-request-available  (was: )

> Bootstrap new OM node
> -
>
> Key: HDDS-4330
> URL: https://issues.apache.org/jira/browse/HDDS-4330
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> In a Ratis-enabled OM cluster, add support to bootstrap a new OM node and add 
> it to the OM Ratis ring. 






[jira] [Updated] (HDDS-4327) Potential resource leakage using BatchOperation

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4327:
-
Labels: pull-request-available  (was: )

> Potential resource leakage using BatchOperation
> ---
>
> Key: HDDS-4327
> URL: https://issues.apache.org/jira/browse/HDDS-4327
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>
> There are a number of places in the code where a BatchOperation is used but not 
> closed. As a best practice, it is better to close them explicitly.
> I have stress-test code that uses BatchOperation to insert into the OM RocksDB. 
> Without closing the BatchOperation explicitly, the process crashes after just a 
> few minutes.
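As an illustration of the safe pattern, the sketch below assumes a BatchOperation-like handle that implements AutoCloseable (the class here is a stand-in, not Ozone's real one); try-with-resources then guarantees the handle is released on every code path.

```java
// Stand-in for an AutoCloseable batch handle that holds a native
// resource until close() releases it.
class BatchOperation implements AutoCloseable {
    static int openBatches = 0;          // tracks unreleased handles

    BatchOperation() { openBatches++; }

    void put(String key, String value) { /* buffer the write */ }

    @Override
    public void close() { openBatches--; }  // releases the underlying buffer
}

public class BatchLeakDemo {
    // Leaky pattern: the batch is never closed, so handles accumulate
    // until the process runs out of native memory.
    static void leaky() {
        BatchOperation batch = new BatchOperation();
        batch.put("k", "v");
        // missing batch.close()
    }

    // Safe pattern: try-with-resources closes the batch on every path,
    // including exceptions thrown by put().
    static void safe() {
        try (BatchOperation batch = new BatchOperation()) {
            batch.put("k", "v");
        }
    }

    public static void main(String[] args) {
        leaky();
        safe();
        System.out.println("open handles: " + BatchOperation.openBatches);
    }
}
```

The same try-with-resources shape applies at each call site where a batch is created, populated, and committed.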






[jira] [Updated] (HDDS-4314) OM Layout Version Manager init throws silent CNF error in integration tests.

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4314:
-
Labels: pull-request-available  (was: )

> OM Layout Version Manager init throws silent CNF error in integration tests.
> 
>
> Key: HDDS-4314
> URL: https://issues.apache.org/jira/browse/HDDS-4314
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> {code}
> org.reflections.ReflectionsException: could not get type for name 
> mockit.MockUp
>   at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:312)
>   at org.reflections.Reflections.expandSuperTypes(Reflections.java:382)
>   at org.reflections.Reflections.(Reflections.java:140)
>   at org.reflections.Reflections.(Reflections.java:182)
>   at org.reflections.Reflections.(Reflections.java:155)
>   at 
> org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.registerOzoneManagerRequests(OMLayoutVersionManagerImpl.java:122)
>   at 
> org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.init(OMLayoutVersionManagerImpl.java:100)
>   at 
> org.apache.hadoop.ozone.om.upgrade.OMLayoutVersionManagerImpl.initialize(OMLayoutVersionManagerImpl.java:83)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:363)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:930)
>   at 
> org.apache.hadoop.ozone.MiniOzoneHAClusterImpl$Builder.createOMService(MiniOzoneHAClusterImpl.java:379)
>   at 
> org.apache.hadoop.ozone.MiniOzoneHAClusterImpl$Builder.build(MiniOzoneHAClusterImpl.java:294)
>   at 
> org.apache.hadoop.ozone.om.TestOzoneManagerHA.init(TestOzoneManagerHA.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.lang.ClassNotFoundException: mockit.MockUp
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:310)
>   ... 23 more
> {code}






[jira] [Updated] (HDDS-4340) Add Operational State to the datanode list command

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4340:
-
Labels: pull-request-available  (was: )

> Add Operational State to the datanode list command
> --
>
> Key: HDDS-4340
> URL: https://issues.apache.org/jira/browse/HDDS-4340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM Client
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> The existing CLI command `ozone admin datanode list` provides output like:
> {code}
> bash-4.2$ ozone admin datanode list
> Datanode: f2b2452a-bf7b-4c6d-b2d6-a0d9d219b21a 
> (/default-rack/172.20.0.8/ozone_datanode_1.ozone_default/2 pipelines) 
> Related pipelines: 
> 16561bc4-746a-4c79-b6f8-1c275b31e37d/THREE/RATIS/OPEN/Leader
> 4e45ff9c-478b-4ab8-a66c-7bfa98c8c632/ONE/RATIS/OPEN/Leader
> Datanode: 57c7fd5f-e32c-4de9-a04a-89d8d4273431 
> (/default-rack/172.20.0.6/ozone_datanode_3.ozone_default/2 pipelines) 
> Related pipelines: 
> 4b24bc61-28cf-471a-893c-a05cac273856/ONE/RATIS/OPEN/Leader
> 16561bc4-746a-4c79-b6f8-1c275b31e37d/THREE/RATIS/OPEN/Follower
> Datanode: 6699fc6d-5c2d-4110-8d88-5ffa5b99f326 
> (/default-rack/172.20.0.3/ozone_datanode_2.ozone_default/2 pipelines) 
> Related pipelines: 
> 16561bc4-746a-4c79-b6f8-1c275b31e37d/THREE/RATIS/OPEN/Follower
> 5ce21cae-9a2d-486d-8b4b-f8ddf75efc61/ONE/RATIS/OPEN/Leader
> {code}
> We should extend this to show the "Operational State" of the node for 
> decommission.






[jira] [Updated] (HDDS-4269) Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4269:
-
Labels: newbie pull-request-available  (was: newbie)

> Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS 
> root directory
> 
>
> Key: HDDS-4269
> URL: https://issues.apache.org/jira/browse/HDDS-4269
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 1.1.0
>Reporter: Wei-Chiu Chuang
>Assignee: Zheng Huang-Mu
>Priority: Major
>  Labels: newbie, pull-request-available
>
> It took me some time to debug a trivial bug.
> The DataNode crashes with this mysterious error and no explanation:
> {noformat}
> 10:11:44.382 PM   INFOMutableVolumeSetMoving Volume : 
> /var/lib/hadoop-ozone/fake_datanode/data/hdds to failed Volumes
> 10:11:46.287 PM   ERROR   StateContextCritical error occurred in 
> StateMachine, setting shutDownMachine
> 10:11:46.287 PM   ERROR   DatanodeStateMachineDatanodeStateMachine 
> Shutdown due to an critical error
> {noformat}
> It turns out that if there are unexpected files under the hdds directory 
> ($hdds.datanode.dir/hdds), the DN thinks the volume is bad and moves it to the 
> failed volume list without any error explanation. I was editing the VERSION file 
> and vim created a temp file under the directory. This is impossible to debug 
> without reading the code.
> {code:java|title=HddsVolumeUtil#checkVolume()}
> } else if(hddsFiles.length == 2) {
>   // The files should be Version and SCM directory
>   if (scmDir.exists()) {
> return true;
>   } else {
> logger.error("Volume {} is in Inconsistent state, expected scm " +
> "directory {} does not exist", volumeRoot, scmDir
> .getAbsolutePath());
> return false;
>   }
> } else {
>   // The hdds root dir should always have 2 files. One is Version file
>   // and other is SCM directory.
>   // <-- HERE: no error is logged before returning false
>   return false;
> }
> {code}
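A hedged sketch of the missing diagnostic: name the unexpected entries before failing the volume. The helper below is illustrative only, not the actual HddsVolumeUtil fix.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VolumeCheckDemo {
    // Returns null when the hdds root layout looks valid (exactly the VERSION
    // file and the scm directory); otherwise returns a diagnostic naming the
    // unexpected entries -- the information the original code never logged.
    static String checkHddsRoot(List<String> entries) {
        List<String> unexpected = new ArrayList<>(entries);
        unexpected.removeAll(Arrays.asList("VERSION", "scm"));
        if (unexpected.isEmpty() && entries.size() == 2) {
            return null;  // healthy volume
        }
        return "Volume is in an inconsistent state; unexpected or missing "
            + "entries under hdds root: " + unexpected;
    }

    public static void main(String[] args) {
        // A vim swap file next to VERSION should produce a clear message,
        // not a silent "Moving Volume ... to failed Volumes".
        System.out.println(checkHddsRoot(
            Arrays.asList("VERSION", "scm", ".VERSION.swp")));
    }
}
```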






[jira] [Updated] (HDDS-4308) Fix issue with quota update

2020-10-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4308:
-
Labels: pull-request-available  (was: )

> Fix issue with quota update
> ---
>
> Key: HDDS-4308
> URL: https://issues.apache.org/jira/browse/HDDS-4308
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
>
> Currently volumeArgs is fetched with getCacheValue and the same object is put 
> into the doubleBuffer; this can cause an issue.
> Let's take the below scenario:
> Initial VolumeArgs quotaBytes -> 10000
> 1. T1 -> Updates VolumeArgs, subtracting 1000, and puts the updated 
> volumeArgs into the DoubleBuffer.
> 2. T2 -> Updates VolumeArgs, subtracting 2000, but has not yet been flushed to the 
> double buffer.
> *At the end of flushing these transactions, our DB should have 7000 as 
> bytes used.*
> Now T1 is picked up by the double buffer, and when it commits, because the 
> shared cached object was put into the doubleBuffer, it flushes the value already 
> updated by T2 and writes bytesUsed as 7000 to the DB.
> Now the OM restarts, and the DB only has transactions up to T1. (We get this 
> info from the TransactionInfo 
> Table: https://issues.apache.org/jira/browse/HDDS-3685)
> T2 is then replayed, as it was never committed to the DB, so 2000 is 
> subtracted again and the DB ends up with 5000.
> But after T2 the value should be 7000, so we have the DB in an incorrect state.
> The issue here: because we put the same cached object into the double 
> buffer, later cache updates leak into earlier flushes.
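The scenario can be reduced to a minimal sketch (class and field names here are illustrative, not Ozone's actual ones): enqueueing the shared cached object lets T2's update leak into T1's flush, while enqueueing a snapshot copy does not.

```java
import java.util.ArrayList;
import java.util.List;

public class SharedCacheFlushDemo {
    // Illustrative stand-in for the cached VolumeArgs object.
    static final class VolumeArgs {
        long usedBytes;
        VolumeArgs(long usedBytes) { this.usedBytes = usedBytes; }
        VolumeArgs copy() { return new VolumeArgs(usedBytes); }
    }

    // Buggy pattern: the double buffer holds the *shared* cached object,
    // so T2's later in-place update is visible when T1 is flushed.
    static long flushShared() {
        VolumeArgs cached = new VolumeArgs(10000);
        List<VolumeArgs> doubleBuffer = new ArrayList<>();
        cached.usedBytes -= 1000;     // T1's update
        doubleBuffer.add(cached);     // same reference enqueued
        cached.usedBytes -= 2000;     // T2's update, before T1 is flushed
        return doubleBuffer.get(0).usedBytes;  // flushes 7000, not T1's 9000
    }

    // Safe pattern: enqueue a snapshot so T1's value is frozen at enqueue time.
    static long flushSnapshot() {
        VolumeArgs cached = new VolumeArgs(10000);
        List<VolumeArgs> doubleBuffer = new ArrayList<>();
        cached.usedBytes -= 1000;
        doubleBuffer.add(cached.copy());  // defensive copy
        cached.usedBytes -= 2000;
        return doubleBuffer.get(0).usedBytes;  // T1's own 9000
    }

    public static void main(String[] args) {
        System.out.println(flushShared() + " vs " + flushSnapshot());
    }
}
```

With the defensive copy, each flushed transaction carries exactly the state it produced, so replay after a restart re-derives the same final value.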






[jira] [Updated] (HDDS-4336) ContainerInfo does not persist BCSID leading to failed replicas reports

2020-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4336:
-
Labels: pull-request-available  (was: )

> ContainerInfo does not persist BCSID leading to failed replicas reports
> ---
>
> Key: HDDS-4336
> URL: https://issues.apache.org/jira/browse/HDDS-4336
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> If you create a container and then close it, the BCSID is synced on the 
> datanodes and the value is then updated in SCM by setting the "sequenceID" 
> field on the containerInfo object for the container.
> If you later restart just SCM, the sequenceID becomes zero, and then 
> container reports for the replica fail with a stack trace like:
> {code}
> Exception in thread "EventQueue-ContainerReportForContainerReportHandler" 
> java.lang.AssertionError
>   at 
> org.apache.hadoop.hdds.scm.container.ContainerInfo.updateSequenceId(ContainerInfo.java:176)
>   at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.updateContainerStats(AbstractContainerReportHandler.java:108)
>   at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.processContainerReplica(AbstractContainerReportHandler.java:83)
>   at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:162)
>   at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:130)
>   at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:50)
>   at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The assertion here is failing, as it does not allow for the sequenceID to be 
> changed on a CLOSED container:
> {code}
>   public void updateSequenceId(long sequenceID) {
> assert (isOpen() || state == HddsProtos.LifeCycleState.QUASI_CLOSED);
> sequenceId = max(sequenceID, sequenceId);
>   }
> {code}
> The issue seems to be caused by the serialisation and deserialisation of the 
> containerInfo object to protobuf, as sequenceId is never persisted or restored.
> However, I am also confused about how this ever worked, as this is a pretty 
> significant problem.






[jira] [Updated] (HDDS-4318) Disable single node pipeline creation by default in Ozone

2020-10-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4318:
-
Labels: pull-request-available  (was: )

> Disable single node pipeline creation by default in Ozone
> -
>
> Key: HDDS-4318
> URL: https://issues.apache.org/jira/browse/HDDS-4318
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Shashikant Banerjee
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
>
> Currently, single node pipeline creation is ON by default in Ozone, though 
> it's not used by default in the Ozone write path. It would be good to disable this 
> by turning off the config "ozone.scm.pipeline.creation.auto.factor.one" by 
> default. This may lead to some unit test failures, and for those tests the 
> config needs to be explicitly set to true.






[jira] [Updated] (HDDS-4296) SCM changes to process Layout Info in heartbeat request/response

2020-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4296:
-
Labels: pull-request-available  (was: )

> SCM changes to process Layout Info in heartbeat request/response
> 
>
> Key: HDDS-4296
> URL: https://issues.apache.org/jira/browse/HDDS-4296
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4311:
-
Labels: pull-request-available  (was: )

> Type-safe config design doc points to OM HA
> ---
>
> Key: HDDS-4311
> URL: https://issues.apache.org/jira/browse/HDDS-4311
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Abstract and links for 
> http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are 
> wrong, reference OM HA design doc.






[jira] [Updated] (HDDS-4325) Incompatible return codes from Ozone getconf -confKey

2020-10-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4325:
-
Labels: pull-request-available  (was: )

> Incompatible return codes from Ozone getconf -confKey
> -
>
> Key: HDDS-4325
> URL: https://issues.apache.org/jira/browse/HDDS-4325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> It seems that the return codes of ozone getconf -confKey differ before and 
> after 1.0.
> Looking at the code, the old code was:
> {code:java}
> /** Method to be overridden by sub classes for specific behavior. */
> int doWorkInternal(OzoneGetConf tool, String[] args) throws Exception {
>   String value = tool.getConf().getTrimmed(key);
>   if (value != null) {
>     tool.printOut(value);
>     return 0;
>   }
>   tool.printError("Configuration " + key + " is missing.");
>   return -1;
> }
> {code}
> while the 1.0 code is:
> {code:java}
> @Override
> public Void call() throws Exception {
>   String value = tool.getConf().getTrimmed(confKey);
>   if (value != null) {
>     tool.printOut(value);
>   } else {
>     tool.printError("Configuration " + confKey + " is missing.");
>   }
>   return null;
> }
> {code}
> We now return null irrespective of the case, while some applications/tests 
> depend on these return codes.
>  
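One way to restore the old contract in a picocli-style command is to implement Callable&lt;Integer&gt; so the returned value becomes the process exit code. The sketch below is illustrative (GetConfDemo and its map-backed conf are stand-ins, not the real tool or the actual HDDS-4325 patch).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

public class GetConfDemo implements Callable<Integer> {
    // Illustrative stand-in for the tool's configuration store.
    static final Map<String, String> conf = new HashMap<>();
    private final String confKey;

    public GetConfDemo(String confKey) { this.confKey = confKey; }

    // Returning Integer (which picocli's execute() uses as the process exit
    // code) restores the pre-1.0 contract: 0 on success, nonzero when the
    // key is missing -- instead of Void/null in every case.
    @Override
    public Integer call() {
        String value = conf.get(confKey);
        if (value != null) {
            System.out.println(value);
            return 0;
        }
        System.err.println("Configuration " + confKey + " is missing.");
        return 1;
    }

    public static void main(String[] args) {
        conf.put("ozone.om.address", "om1:9862");
        int ok = new GetConfDemo("ozone.om.address").call();
        int missing = new GetConfDemo("no.such.key").call();
        System.out.println("exit codes: " + ok + " " + missing);
    }
}
```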






[jira] [Updated] (HDDS-4322) Add integration tests for Decommission and resolve issues detected by the tests

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4322:
-
Labels: pull-request-available  (was: )

> Add integration tests for Decommission and resolve issues detected by the 
> tests
> ---
>
> Key: HDDS-4322
> URL: https://issues.apache.org/jira/browse/HDDS-4322
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> Add a series of integration tests to prove decommission works, and that 
> decommission can survive a restart of SCM.
> As part of adding these tests, some issues were discovered and fixed in 
> the process of debugging the tests.






[jira] [Updated] (HDDS-4319) Compile error with Java 11

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4319:
-
Labels: pull-request-available  (was: )

> Compile error with Java 11
> --
>
> Key: HDDS-4319
> URL: https://issues.apache.org/jira/browse/HDDS-4319
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> {code:title=https://github.com/apache/hadoop-ozone/runs/1217093596#step:6:5632}
> Error:  Failed to execute goal 
> org.codehaus.mojo:aspectj-maven-plugin:1.10:compile (default) on project 
> hadoop-ozone-ozone-manager: Execution default of goal 
> org.codehaus.mojo:aspectj-maven-plugin:1.10:compile failed: Plugin 
> org.codehaus.mojo:aspectj-maven-plugin:1.10 or one of its dependencies could 
> not be resolved: Could not find artifact com.sun:tools:jar:11.0.8 at 
> specified path /opt/hostedtoolcache/jdk/11.0.8/x64/../lib/tools.jar -> [Help 
> 1]
> {code}
> https://github.com/mojohaus/aspectj-maven-plugin/issues/24#issuecomment-419077658






[jira] [Updated] (HDDS-4309) Fix inconsistent Recon config keys that start with "recon.om."

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4309:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix inconsistent Recon config keys that start with "recon.om."
> --
>
> Key: HDDS-4309
> URL: https://issues.apache.org/jira/browse/HDDS-4309
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Affects Versions: 1.0.0
>Reporter: Aravindan Vijayan
>Assignee: François Risch
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> {code}
> hadoop-hdds/common/src/main/resources/ozone-default.xml
> 2318:recon.om.connection.request.timeout
> 2327:recon.om.connection.timeout
> 2336:recon.om.socket.timeout
> 2345:recon.om.snapshot.task.initial.delay
> 2353:recon.om.snapshot.task.interval.delay
> 2361:recon.om.snapshot.task.flush.param
> {code}
> These need to be deprecated and changed to "ozone.recon.om.<>".






[jira] [Updated] (HDDS-4317) Copy UserVolumeInfo to PersistedUserVolumeInfo in interface-storage

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4317:
-
Labels: pull-request-available  (was: )

> Copy UserVolumeInfo to PersistedUserVolumeInfo in interface-storage
> ---
>
> Key: HDDS-4317
> URL: https://issues.apache.org/jira/browse/HDDS-4317
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4316) Upgrade to angular 1.8.0 due to CVE-2020-7676

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4316:
-
Labels: pull-request-available  (was: )

> Upgrade to angular 1.8.0 due to CVE-2020-7676
> -
>
> Key: HDDS-4316
> URL: https://issues.apache.org/jira/browse/HDDS-4316
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Angular versions < 1.8.0 are vulnerable to cross-site scripting
> [https://nvd.nist.gov/vuln/detail/CVE-2020-7676]






[jira] [Updated] (HDDS-4315) Use Epoch to generate unique ObjectIDs

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4315:
-
Labels: pull-request-available  (was: )

> Use Epoch to generate unique ObjectIDs
> --
>
> Key: HDDS-4315
> URL: https://issues.apache.org/jira/browse/HDDS-4315
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> In a non-Ratis OM, the transaction index used to generate the ObjectID is reset 
> on OM restart. This can lead to duplicate ObjectIDs after a restart, but 
> ObjectIDs should be unique. 
> HDDS-2939 and NFS are some of the features which depend on ObjectIDs 
> being unique.
> This Jira aims to introduce an epoch number in OM which is incremented on OM 
> restarts. The epoch is persisted on disk. This epoch will be used to set the 
> first 16 bits of the objectID to ensure that objectIDs are unique even after 
> OM restart.
> The highest epoch number is reserved for transactions coming through ratis. 
> This will take care of the scenario where OM ratis is enabled on an existing 
> cluster. 
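The epoch scheme described above can be sketched with simple bit arithmetic: a 16-bit epoch occupies the high bits of the 64-bit ObjectID, and the transaction index the remaining 48. The names below are hypothetical, not the actual OM implementation:

```java
// Hypothetical sketch of epoch-prefixed ObjectIDs: the first 16 bits hold an
// epoch incremented on each OM restart, the remaining 48 bits hold the
// transaction index. Same index under different epochs yields different ids.
public class ObjectIdSketch {
    static final int EPOCH_BITS = 16;
    static final int INDEX_BITS = Long.SIZE - EPOCH_BITS; // 48 bits for the txn index

    // Combine epoch and transaction index into one 64-bit id.
    static long packObjectId(long epoch, long txIndex) {
        return (epoch << INDEX_BITS) | (txIndex & ((1L << INDEX_BITS) - 1));
    }

    // Recover the epoch from an id (useful for debugging/verification).
    static long epochOf(long objectId) {
        return objectId >>> INDEX_BITS;
    }
}
```

With this layout, a restarted OM that resets its transaction index to 0 still cannot collide with ids minted before the restart, because the epoch differs.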






[jira] [Updated] (HDDS-4313) Create thread-local instance of FileSystem in HadoopFsGenerator

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4313:
-
Labels: pull-request-available  (was: )

> Create thread-local instance of FileSystem in HadoopFsGenerator
> ---
>
> Key: HDDS-4313
> URL: https://issues.apache.org/jira/browse/HDDS-4313
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> [~elek]'s most recent performance test uncovered a client-side bottleneck in 
> Freon's Hadoop FS generator: a global {{FileSystem}} instance causes lock 
> contention among test threads.
> https://github.com/elek/ozone-notes/blob/master/static/results/23_hcfs_write/profile.svg






[jira] [Updated] (HDDS-4312) findbugs check succeeds despite failure

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4312:
-
Labels: pull-request-available  (was: )

> findbugs check succeeds despite failure
> ---
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> The findbugs check has been silently failing while reporting success for some time 
> now.  The problem is that {{findbugs.sh}} determines its exit code based on the 
> number of findbugs failures.  If the {{compile}} step fails, the exit code is 0, 
> i.e. success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}






[jira] [Updated] (HDDS-4310) Ozone getconf broke the compatibility

2020-10-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4310:
-
Labels: pull-request-available  (was: )

> Ozone getconf broke the compatibility
> -
>
> Key: HDDS-4310
> URL: https://issues.apache.org/jira/browse/HDDS-4310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>
> Currently 'ozone getconf -confKey' does not work, as HDDS-3102 removed the 
> need to prepend '-' to options.
> {code:java}
> RUNNING: /opt/cloudera/parcels/CDH/bin/ozone getconf -confKey 
> ozone.om.service.ids 2020-10-05 19:10:09,110|INFO|MainThread|machine.py:180 - 
> run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Unknown options: '-confKey', 
> 'ozone.om.service.ids' 2020-10-05 19:10:09,111|INFO|MainThread|machine.py:180 
> - run()||GUID=8644ce5b-cfe9-4e6b-9b3f-55c29c950489|Possible solutions: -conf
> {code}
> Some users built automation with these commands, and this 
> change broke them.






[jira] [Updated] (HDDS-4300) Remove no longer needed class DatanodeAdminNodeDetails

2020-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4300:
-
Labels: pull-request-available  (was: )

> Remove no longer needed class DatanodeAdminNodeDetails
> --
>
> Key: HDDS-4300
> URL: https://issues.apache.org/jira/browse/HDDS-4300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> DatanodeAdminNodeDetails was added earlier in the decommission branch to 
> track metrics, the decommission state and the maintenance end time. 
> After enhancing NodeStatus to hold the maintenance expiry time, this class is 
> no longer needed, and it also duplicates information stored in other 
> existing places.
> This change removes it; metrics etc. can be added later in a different 
> way.






[jira] [Updated] (HDDS-4266) CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4266:
-
Labels: pull-request-available  (was: )

> CreateFile : store parent dir entries into DirTable and file entry into 
> separate FileTable
> --
>
> Key: HDDS-4266
> URL: https://issues.apache.org/jira/browse/HDDS-4266
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
>
> This task is to handle the #createFile ofs client request. The idea is to 
> store all the missing parents of the {{keyname}} into the 'DirTable' and the 
> file into the 'FileTable'.






[jira] [Updated] (HDDS-4304) Close Container event can fail if pipeline is removed

2020-10-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4304:
-
Labels: pull-request-available  (was: )

> Close Container event can fail if pipeline is removed
> -
>
> Key: HDDS-4304
> URL: https://issues.apache.org/jira/browse/HDDS-4304
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> If you call `pipelineManager.finalizeAndDestroyPipeline()` with 
> onTimeout=false, the finalizePipeline call results in a 
> closeContainer event being fired for every container on the pipeline. These 
> are handled asynchronously.
> However, immediately after that, the `destroyPipeline(...)` call is made. 
> This removes the pipeline details from the various maps / stores.
> Then the closeContainer events get processed, and they attempt to remove the 
> container from the pipeline. However, as the pipeline has already been 
> destroyed, this throws an exception and the close container events never get 
> sent to the DNs:
> {code}
> 2020-10-01 15:44:18,838 
> [EventQueue-CloseContainerForCloseContainerEventHandler] INFO 
> container.CloseContainerEventHandler: Close container Event triggered for 
> container : #2
> 2020-10-01 15:44:18,842 
> [EventQueue-CloseContainerForCloseContainerEventHandler] ERROR 
> container.CloseContainerEventHandler: Failed to close the container #2.
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=59e5ae16-f1fe-45ff-9044-dd237b0e91c6 not found
>   at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removeContainerFromPipeline(PipelineStateMap.java:372)
>   at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removeContainerFromPipeline(PipelineStateManager.java:111)
>   at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removeContainerFromPipeline(SCMPipelineManager.java:413)
>   at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerState(SCMContainerManager.java:352)
>   at 
> org.apache.hadoop.hdds.scm.container.SCMContainerManager.updateContainerState(SCMContainerManager.java:331)
>   at 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler.onMessage(CloseContainerEventHandler.java:66)
>   at 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler.onMessage(CloseContainerEventHandler.java:45)
>   at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at java.base/java.util.concurrent.ThreadPoolExecutor
> {code}
> The simple solution is to catch the exception and ignore it.
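The proposed fix can be sketched as follows. All names here are hypothetical stand-ins (the real handler lives in `CloseContainerEventHandler` and the real exception is `org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException`); the point is only the catch-and-ignore shape:

```java
// Sketch of the proposed fix: if the pipeline was already destroyed, removing
// the container from it throws; catching and ignoring the exception lets the
// close-container command still reach the datanodes.
public class CloseContainerSketch {
    // Stand-in for org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException.
    static class PipelineNotFoundException extends Exception {
        PipelineNotFoundException(String msg) { super(msg); }
    }

    // Simulates removeContainerFromPipeline() racing with destroyPipeline():
    // the pipeline is already gone, so the removal always throws here.
    static void removeContainerFromPipeline(long containerId)
            throws PipelineNotFoundException {
        throw new PipelineNotFoundException(
            "pipeline not found for container #" + containerId);
    }

    // Returns true when the close-container event can proceed to notify the DNs.
    static boolean onCloseContainer(long containerId) {
        try {
            removeContainerFromPipeline(containerId);
        } catch (PipelineNotFoundException e) {
            // Pipeline already destroyed: nothing to remove the container from,
            // so ignore and continue closing the container.
        }
        return true;
    }
}
```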






[jira] [Updated] (HDDS-4306) Ozone checkstyle rule can't be imported to IntelliJ

2020-10-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4306:
-
Labels: pull-request-available  (was: )

> Ozone checkstyle rule can't be imported to IntelliJ
> ---
>
> Key: HDDS-4306
> URL: https://issues.apache.org/jira/browse/HDDS-4306
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> CheckStyle: move the LineLength check's parent from TreeWalker to Checker, 
> otherwise it fails to import into the latest IntelliJ.
> A similar issue has been reported here, and I've verified locally that 
> IntelliJ can import the checkstyle rules after the fix. 
> https://github.com/checkstyle/checkstyle/issues/2116






[jira] [Updated] (HDDS-4220) BlockManagerImpl#getBlockByID does unnecessary serialization

2020-10-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4220:
-
Labels: pull-request-available  (was: )

> BlockManagerImpl#getBlockByID does unnecessary serialization
> 
>
> Key: HDDS-4220
> URL: https://issues.apache.org/jira/browse/HDDS-4220
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
>
> After HDDS-3869, tables in the datanode handle coding/decoding objects 
> to/from RocksDB, and the caller no longer has to do this manually. As a 
> result, the BlockManagerImpl#getBlockByID method should now return a 
> BlockData type, instead of a byte array. In the current implementation, this 
> method converts the block data into a byte array and returns it to the 
> caller, who then converts the byte array back to block data in order to use 
> it.






[jira] [Updated] (HDDS-4302) Shade the org.apache.common.lang3 package as this is coming from other hadoop packages as well.

2020-10-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4302:
-
Labels: pull-request-available  (was: )

> Shade the org.apache.common.lang3 package as this is coming from other hadoop 
> packages as well.
> ---
>
> Key: HDDS-4302
> URL: https://issues.apache.org/jira/browse/HDDS-4302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>
> In one of our duplicate-class tests, we noticed duplicate classes 
> because of commons-lang3. To avoid class collisions, it's good to shade the 
> commons-lang3 package as well.
> java.lang.Exception: Duplicate class 
> 'org.apache.commons.lang3.arch.Processor$Arch.class' detected in 
> '/Users/umagangumalla/Work/repos/Gerrit/xxx/xxx/target/xxx-client-x2.3-dependencies/hadoop-ozone-filesystem-hadoop3-.jar',
>  class is already present in 
> '/Users/umagangumalla/Work/repos/Gerrit/xxx/xxx/target/xxx-client-xxx-dependencies/commons-lang3-3.9.jar'






[jira] [Updated] (HDDS-4301) SCM CA certificate does not encode KeyUsage extension properly

2020-10-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4301:
-
Labels: pull-request-available  (was: )

> SCM CA certificate does not encode KeyUsage extension properly
> ---
>
> Key: HDDS-4301
> URL: https://issues.apache.org/jira/browse/HDDS-4301
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Security
>Affects Versions: 1.0.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> This could be problematic with strict security providers such as FIPS ones. The 
> default non-FIPS providers such as SunJCE and BC work fine, though. 
> This ticket is opened to fix it. 
> {code:java}
> 2020-09-30 12:01:52,962 ERROR 
> org.apache.hadoop.hdds.security.x509.certificate.authority.DefaultCAServer: 
> Unable to initialize CertificateServer.
> org.apache.hadoop.hdds.security.exception.SCMSecurityException: 
> java.security.cert.CertificateParsingException: cannot construct KeyUsage: 
> java.lang.IllegalArgumentException: illegal object in getInstance: 
> com.safelogic.cryptocomply.asn1.DEROctetString
> at 
> org.apache.hadoop.hdds.security.x509.certificate.utils.CertificateCodec.getPEMEncodedString(CertificateCodec.java:105)
> at 
> org.apache.hadoop.hdds.security.x509.certificate.utils.CertificateCodec.writeCertificate(CertificateCodec.java:182)
> at 
> org.apache.hadoop.hdds.security.x509.certificate.authority.DefaultCAServer.generateRootCertificate(DefaultCAServer.java:495)
> at 
> org.apache.hadoop.hdds.security.x509.certificate.authority.DefaultCAServer.generateSelfSignedCA(DefaultCAServer.java:303)
>   
> {code}






[jira] [Updated] (HDDS-4268) Prepare for Upgrade step should purge the log after waiting for the last txn to be applied.

2020-10-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4268:
-
Labels: pull-request-available  (was: )

> Prepare for Upgrade step should purge the log after waiting for the last txn 
> to be applied.
> ---
>
> Key: HDDS-4268
> URL: https://issues.apache.org/jira/browse/HDDS-4268
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
>
> This is a follow-up task from HDDS-4227: the prepare 
> upgrade/downgrade task should purge the Raft log immediately after waiting 
> for the last txn to be applied. This is to make sure that we don't "apply" 
> transactions in different versions of the code across the quorum. A lagging 
> follower will use a Ratis snapshot to bootstrap itself on restart.






[jira] [Updated] (HDDS-4271) Avoid logging chunk content in Ozone Insight

2020-10-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4271:
-
Labels: pull-request-available  (was: )

> Avoid logging chunk content in Ozone Insight
> 
>
> Key: HDDS-4271
> URL: https://issues.apache.org/jira/browse/HDDS-4271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-2660 added an insight point for the datanode dispatcher.  At trace level 
> it logs all chunk content, which can be huge and contain control characters, 
> so I think we should avoid it.






[jira] [Updated] (HDDS-4299) Display Ratis version with ozone version

2020-10-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4299:
-
Labels: pull-request-available  (was: )

> Display Ratis version with ozone version
> 
>
> Key: HDDS-4299
> URL: https://issues.apache.org/jira/browse/HDDS-4299
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> During development, Ozone uses snapshot releases of Ratis. It can be 
> useful to print the exact version of the Ratis in use as part of the output 
> of "ozone version".
> Ratis versions are part of the jar files since RATIS-1050.
> This can make testing easier, as it's easier to check which Ratis version 
> is used. 






[jira] [Updated] (HDDS-4292) Ozone Client not working with Hadoop Version < 3.2

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4292:
-
Labels: pull-request-available  (was: )

> Ozone Client not working with Hadoop Version  < 3.2
> ---
>
> Key: HDDS-4292
> URL: https://issues.apache.org/jira/browse/HDDS-4292
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 1.0.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> HDDS-3560 created a new ProxyInfo object in case of an IllegalAccessError. 
> However, it does not return the new instance, which causes an NPE in Hadoop 
> versions < 3.2:
> {code:java}
> 20/09/29 23:10:22 ERROR client.OzoneClientFactory: Couldn't create RpcClient 
> protocol exception:20/09/29 23:10:22 ERROR client.OzoneClientFactory: 
> Couldn't create RpcClient protocol exception:java.lang.NullPointerException 
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.isRpcInvocation(RetryInvocationHandler.java:435)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:354)
>  at com.sun.proxy.$Proxy10.submitRequest(Unknown Source) at 
> org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:89)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:213)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1030)
>  at org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:175) 
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:242)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:113)
>  at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.(BasicOzoneClientAdapterImpl.java:149)
>  at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.(OzoneClientAdapterImpl.java:51)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:94)
>  at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:161)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at 
> org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352) at 
> org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250) at 
> org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233) at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) 
> at org.apache.hadoop.fs.shell.Command.run(Command.java:177) at 
> org.apache.hadoop.fs.FsShell.run(FsShell.java:326) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at 
> org.apache.hadoop.fs.FsShell.main(FsShell.java:389)ls: Couldn't create 
> RpcClient protocol
> {code}






[jira] [Updated] (HDDS-4251) Update Ratis version to latest snapshot

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4251:
-
Labels: pull-request-available  (was: )

> Update Ratis version to latest snapshot
> ---
>
> Key: HDDS-4251
> URL: https://issues.apache.org/jira/browse/HDDS-4251
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> This Jira aims to update Ozone to the latest Ratis snapshot, which has a 
> critical fix for OM HA (RATIS-1025).






[jira] [Updated] (HDDS-1880) Decommissioning and maintenance mode in Ozone

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1880:
-
Labels: pull-request-available  (was: )

> Decommissioning and maintenance mode in Ozone 
> ---
>
> Key: HDDS-1880
> URL: https://issues.apache.org/jira/browse/HDDS-1880
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> This is the umbrella jira for decommissioning support in Ozone. Design doc 
> will be attached soon.






[jira] [Updated] (HDDS-4172) Implement Finalize command in Ozone Manager server.

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4172:
-
Labels: pull-request-available  (was: )

> Implement Finalize command in Ozone Manager server.
> ---
>
> Key: HDDS-4172
> URL: https://issues.apache.org/jira/browse/HDDS-4172
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Affects Versions: 1.1.0
>Reporter: Aravindan Vijayan
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Using changes from HDDS-4141 and HDDS-3829, we can finish the OM finalization 
> logic by implementing the Ratis request to Finalize.
> On the server side, this finalize command should update the internal Upgrade 
> state to "Finalized". This operation can be a No-Op if there are no layout 
> changes across an upgrade.






[jira] [Updated] (HDDS-4298) Use an interface in Ozone client instead of XceiverClientManager

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4298:
-
Labels: pull-request-available  (was: )

> Use an interface in Ozone client instead of XceiverClientManager
> 
>
> Key: HDDS-4298
> URL: https://issues.apache.org/jira/browse/HDDS-4298
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> XceiverClientManager is used throughout the Ozone client (Key/Block 
> Input/OutputStream) to obtain a client when required.
> To make it easier to write genesis/real unit tests, the client should depend 
> on a generic interface instead of XceiverClientManager, so the manager can 
> easily be replaced with a mock implementation.
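A minimal sketch of the idea, with hypothetical names (ClientFactory, MockClientFactory and friends are illustrations, not actual Ozone classes): the stream code depends only on a small interface, so tests can supply a deterministic mock instead of a real networked manager.

```java
// Hypothetical sketch: depend on an interface, not a concrete manager.
interface ClientFactory {
    String acquireClient(String pipelineId);
    void releaseClient(String client);
}

// A production implementation would wrap the real XceiverClientManager.
class PooledClientFactory implements ClientFactory {
    public String acquireClient(String pipelineId) { return "client-for-" + pipelineId; }
    public void releaseClient(String client) { /* return to pool */ }
}

// Test double: no network, deterministic behavior.
class MockClientFactory implements ClientFactory {
    int acquired = 0;
    public String acquireClient(String pipelineId) { acquired++; return "mock"; }
    public void releaseClient(String client) { }
}

public class InterfaceSketch {
    // Stream code only sees the interface, so either implementation works.
    static String readBlock(ClientFactory factory, String pipelineId) {
        String client = factory.acquireClient(pipelineId);
        try {
            return "data-via-" + client;
        } finally {
            factory.releaseClient(client);
        }
    }

    public static void main(String[] args) {
        MockClientFactory mock = new MockClientFactory();
        System.out.println(readBlock(mock, "p1"));   // data-via-mock
        System.out.println(mock.acquired);           // 1
    }
}
```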






[jira] [Updated] (HDDS-4297) Allow multiple transactions per container to be sent for deletion by SCM

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4297:
-
Labels: pull-request-available  (was: )

> Allow multiple transactions per container to be sent for deletion by SCM
> 
>
> Key: HDDS-4297
> URL: https://issues.apache.org/jira/browse/HDDS-4297
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
>
> Currently the SCM Block Deleting Service sends at most one transaction per 
> container to the datanode for deletion. This can slow down deletion when 
> there are multiple delete transactions pending for a container.
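The proposed change can be illustrated with a hedged sketch (names and data shapes are hypothetical, not the actual SCMBlockDeletingService code): instead of picking at most one pending delete transaction per container, collect up to a configurable number per container.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DeleteTxnBatcher {
    // Select up to maxPerContainer transactions per container from the
    // pending list, instead of the previous limit of one per container.
    // Each pending entry is {containerId, txnId}. Hypothetical sketch.
    static Map<Long, List<Long>> select(List<long[]> pending, int maxPerContainer) {
        Map<Long, List<Long>> byContainer = new LinkedHashMap<>();
        for (long[] txn : pending) {
            List<Long> txns = byContainer.computeIfAbsent(txn[0], k -> new ArrayList<>());
            if (txns.size() < maxPerContainer) {
                txns.add(txn[1]);
            }
        }
        return byContainer;
    }

    public static void main(String[] args) {
        List<long[]> pending = List.of(
            new long[]{1, 100}, new long[]{1, 101}, new long[]{1, 102},
            new long[]{2, 200});
        // With a limit of 2, container 1 gets two transactions, container 2 one.
        System.out.println(select(pending, 2)); // {1=[100, 101], 2=[200]}
    }
}
```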






[jira] [Updated] (HDDS-3728) Bucket space: check quotaUsageInBytes when write key

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3728:
-
Labels: pull-request-available  (was: )

> Bucket space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4253) SCM changes to process Layout Info in register request/response

2020-09-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4253:
-
Labels: pull-request-available  (was: )

> SCM changes to process Layout Info in register request/response
> ---
>
> Key: HDDS-4253
> URL: https://issues.apache.org/jira/browse/HDDS-4253
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4287) Exclude protobuf classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4287:
-
Labels: pull-request-available  (was: )

> Exclude protobuf classes from ozone-filesystem-hadoop3 jars
> 
>
> Key: HDDS-4287
> URL: https://issues.apache.org/jira/browse/HDDS-4287
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>
> Currently the ozone-filesystem-hadoop3 jar includes protobuf classes. We 
> already treat the dependency on Hadoop jars as a prerequisite, and Hadoop 
> brings the protobuf classes along with its jars. Bundling the protobuf 
> classes again in the ozone-filesystem-hadoop3 jar is pure duplication, so we 
> can exclude them.






[jira] [Updated] (HDDS-4285) Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4285:
-
Labels: pull-request-available  (was: )

> Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> Ozone read operations turned out to be slow, mainly because we call 
> UGI.getCurrentUser() for the block token on every call.
> We need to cache the block token / UGI.getCurrentUser() call to make reads 
> faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}
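The caching idea can be sketched generically (a hedged illustration, not the actual Ozone patch; the supplier stands in for the expensive UGI/token lookup): memoize the lookup once instead of repeating it on every read call.

```java
import java.util.function.Supplier;

public class CachedLookup<T> implements Supplier<T> {
    // Wraps an expensive supplier (e.g. a UGI/token lookup) and evaluates it
    // only once; later calls return the cached value. Hypothetical sketch.
    private final Supplier<T> delegate;
    private T cached;
    private boolean resolved;
    int delegateCalls;                      // visible for the demo below

    CachedLookup(Supplier<T> delegate) { this.delegate = delegate; }

    public synchronized T get() {
        if (!resolved) {
            cached = delegate.get();
            delegateCalls++;
            resolved = true;
        }
        return cached;
    }

    public static void main(String[] args) {
        CachedLookup<String> user = new CachedLookup<>(() -> "current-user");
        // Simulate many read calls; the expensive lookup runs only once.
        for (int i = 0; i < 1000; i++) {
            user.get();
        }
        System.out.println(user.delegateCalls); // 1
    }
}
```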






[jira] [Updated] (HDDS-4290) Enable insight point for SCM heartbeat protocol

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4290:
-
Labels: pull-request-available  (was: )

> Enable insight point for SCM heartbeat protocol
> ---
>
> Key: HDDS-4290
> URL: https://issues.apache.org/jira/browse/HDDS-4290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> The registration of the already implemented insight-point seems to be missing.






[jira] [Updated] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4288:
-
Labels: pull-request-available  (was: )

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the change introduced by 
> https://issues.apache.org/jira/browse/HDDS-4166






[jira] [Updated] (HDDS-4282) Improve the emptyDir syntax

2020-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4282:
-
Labels: pull-request-available  (was: )

> Improve the emptyDir syntax
> ---
>
> Key: HDDS-4282
> URL: https://issues.apache.org/jira/browse/HDDS-4282
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Minor
>  Labels: pull-request-available
>
> The usage of emptyDir should be {} instead of null:
>  
> {code:java}
> emptyDir: null
> emptyDir: {}{code}






[jira] [Updated] (HDDS-4273) Volume Namespace:`usedNamespace` works by `ozone sh vol info`

2020-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4273:
-
Labels: pull-request-available  (was: )

> Volume Namespace:`usedNamespace` works by `ozone sh vol info`
> -
>
> Key: HDDS-4273
> URL: https://issues.apache.org/jira/browse/HDDS-4273
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4280) Document notable configuration for Recon

2020-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4280:
-
Labels: pull-request-available  (was: )

> Document notable configuration for Recon 
> -
>
> Key: HDDS-4280
> URL: https://issues.apache.org/jira/browse/HDDS-4280
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Recon
>Affects Versions: 1.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: pull-request-available
>
> In the [Recon doc 
> link|https://hadoop.apache.org/ozone/docs/1.0.0/feature/recon.html], there is 
> no helpful description of how to quickly set up the Recon server. As Recon 
> is a major feature of the Ozone 1.0 release, we need to complete this document.






[jira] [Updated] (HDDS-4264) Uniform naming conventions of Ozone Shell Options.

2020-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4264:
-
Labels: pull-request-available  (was: )

> Uniform naming conventions of Ozone Shell Options.
> --
>
> Key: HDDS-4264
> URL: https://issues.apache.org/jira/browse/HDDS-4264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-09-22-14-51-18-968.png
>
>
> The current Ozone shell commands are inconsistent: some options use camelCase 
> ("hump") names, while others use '-' separators. We need to unify the naming 
> convention.
> See the usage [documentation of 
> Picocli|https://picocli.info/#command-methods], which uses '-' separators, so 
> I'm going to standardize the naming convention on that style here.
>  !image-2020-09-22-14-51-18-968.png! 
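The renaming can be illustrated with a small helper that converts camelCase ("hump") option names to the '-' style (a sketch for illustration only, not part of the actual patch; the option names below are made up):

```java
public class OptionNames {
    // Convert a camelCase option name to the kebab-case form,
    // e.g. "quotaInBytes" -> "quota-in-bytes".
    static String toKebab(String camel) {
        return camel.replaceAll("([a-z0-9])([A-Z])", "$1-$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toKebab("quotaInBytes"));  // quota-in-bytes
        System.out.println(toKebab("storageType"));   // storage-type
    }
}
```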






[jira] [Updated] (HDDS-4194) Create a script to check AWS S3 compatibility

2020-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4194:
-
Labels: pull-request-available  (was: )

> Create a script to check AWS S3 compatibility
> -
>
> Key: HDDS-4194
> URL: https://issues.apache.org/jira/browse/HDDS-4194
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Ozone S3G implements the REST interface of the AWS S3 protocol. Our robot 
> test based scripts check whether Ozone S3 can be used with the AWS client tool.
> But occasionally we should verify that our robot test definitions themselves 
> are valid: the robot tests should be executed against a real AWS endpoint and 
> bucket(s), and all the test cases should pass.
> This patch provides a simple shell script to make this cross-check easier.






[jira] [Updated] (HDDS-2660) Create insight point for datanode container protocol

2020-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2660:
-
Labels: pull-request-available  (was: )

> Create insight point for datanode container protocol
> 
>
> Key: HDDS-2660
> URL: https://issues.apache.org/jira/browse/HDDS-2660
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> The goal of this task is to create a new insight point for the datanode 
> container protocol ({{HddsDispatcher}}) to be able to debug 
> {{client<->datanode}} communication.






[jira] [Updated] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4270:
-
Labels: pull-request-available  (was: )

> Add more reusable byteman scripts to debug ofs/o3fs performance
> ---
>
> Key: HDDS-4270
> URL: https://issues.apache.org/jira/browse/HDDS-4270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> I am using https://byteman.jboss.org to debug the performance of Spark + 
> teragen with different scripts. Some byteman scripts are already shared in 
> HDDS-4095 and HDDS-342, but it seems to be good practice to share the newer 
> scripts as well, to make it possible to reproduce performance problems.
> For using byteman with Ozone, see this video:
> https://www.youtube.com/watch?v=_4eYsH8F50E&list=PLCaV-jpCBO8U_WqyySszmbmnL-dhlzF6o&index=5






[jira] [Updated] (HDDS-4274) Change the log level of the SCM Delete block to improve performance.

2020-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4274:
-
Labels: pull-request-available  (was: )

> Change the log level of the SCM Delete block to improve performance.
> 
>
> Key: HDDS-4274
> URL: https://issues.apache.org/jira/browse/HDDS-4274
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> There are scenarios in which the delete operation is very frequent. The 
> info-level logs in SCM are affecting performance in these cases, so we should 
> change them to debug.
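The effect of the change can be sketched with a level guard around the per-delete message (illustrative only; this uses plain java.util.logging rather than the SLF4J logger Ozone actually uses, and all names are hypothetical):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeleteLogging {
    static final Logger LOG = Logger.getLogger("scm.block.delete");
    static int messagesBuilt = 0;

    // Building the message only when the level is enabled avoids paying the
    // string-construction cost on every delete in the hot path.
    static void logDeleted(long containerId, long txnId) {
        if (LOG.isLoggable(Level.FINE)) {            // FINE ~ debug
            messagesBuilt++;
            LOG.fine("Deleted blocks for container " + containerId
                + " txn " + txnId);
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);                    // debug disabled
        for (int i = 0; i < 10000; i++) {
            logDeleted(1, i);
        }
        System.out.println(messagesBuilt);           // 0
    }
}
```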






[jira] [Updated] (HDDS-4272) Volume namespace: add namespaceQuotaUsage and update it when create and delete bucket

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4272:
-
Labels: pull-request-available  (was: )

> Volume namespace: add namespaceQuotaUsage and update it when create and 
> delete bucket
> -
>
> Key: HDDS-4272
> URL: https://issues.apache.org/jira/browse/HDDS-4272
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4242) Copy PrefixInfo proto to new project hadoop-ozone/interface-storage

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4242:
-
Labels: pull-request-available  (was: )

> Copy PrefixInfo proto to new project hadoop-ozone/interface-storage
> ---
>
> Key: HDDS-4242
> URL: https://issues.apache.org/jira/browse/HDDS-4242
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4270:
-
Labels: pull-request-available  (was: )

> Add more reusable byteman scripts to debug ofs/o3fs performance
> ---
>
> Key: HDDS-4270
> URL: https://issues.apache.org/jira/browse/HDDS-4270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> I am using https://byteman.jboss.org to debug the performance of Spark + 
> teragen with different scripts. Some byteman scripts are already shared in 
> HDDS-4095 and HDDS-342, but it seems to be good practice to share the newer 
> scripts as well, to make it possible to reproduce performance problems.
> For using byteman with Ozone, see this video:
> https://www.youtube.com/watch?v=_4eYsH8F50E&list=PLCaV-jpCBO8U_WqyySszmbmnL-dhlzF6o&index=5






[jira] [Updated] (HDDS-3297) TestOzoneClientKeyGenerator is flaky

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3297:
-
Labels: TriagePending flaky-test ozone-flaky-test pull-request-available  
(was: TriagePending flaky-test ozone-flaky-test)

> TestOzoneClientKeyGenerator is flaky
> 
>
> Key: HDDS-3297
> URL: https://issues.apache.org/jira/browse/HDDS-3297
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Marton Elek
>Priority: Critical
>  Labels: TriagePending, flaky-test, ozone-flaky-test, 
> pull-request-available
> Attachments: 
> org.apache.hadoop.ozone.freon.TestOzoneClientKeyGenerator-output.txt
>
>
> Sometimes it hangs and is only stopped after a timeout.






[jira] [Updated] (HDDS-3966) Intermittent crash in TestOMRatisSnapshots

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3966:
-
Labels: pull-request-available  (was: )

> Intermittent crash in TestOMRatisSnapshots
> --
>
> Key: HDDS-3966
> URL: https://issues.apache.org/jira/browse/HDDS-3966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> TestOMRatisSnapshots was recently enabled and is crashing intermittently:
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1690/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1710/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/15/1713/it-hdds-om






[jira] [Updated] (HDDS-2011) TestRandomKeyGenerator fails due to timeout

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2011:
-
Labels: pull-request-available  (was: )

> TestRandomKeyGenerator fails due to timeout
> ---
>
> Key: HDDS-2011
> URL: https://issues.apache.org/jira/browse/HDDS-2011
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
> Attachments: 
> org.apache.hadoop.ozone.freon.TestRandomKeyGenerator-output.txt
>
>
> {{TestRandomKeyGenerator#bigFileThan2GB}} fails intermittently due to a 
> timeout in Ratis {{appendEntries}}.  The commit on the pipeline fails, and a 
> new pipeline cannot be created with 2 nodes (there are 5 nodes in total).
> Most recent one: 
> https://github.com/elek/ozone-ci/tree/master/trunk/trunk-nightly-pz9vg/integration/hadoop-ozone/tools






[jira] [Updated] (HDDS-3290) Remove deprecated RandomKeyGenerator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3290:
-
Labels: TriagePending pull-request-available  (was: TriagePending)

> Remove deprecated RandomKeyGenerator
> 
>
> Key: HDDS-3290
> URL: https://issues.apache.org/jira/browse/HDDS-3290
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: TriagePending, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Our first Freon test (RandomKeyGenerator) is deprecated, as all of its 
> functionality is available with a simplified architecture 
> (BaseFreonGenerator). We can remove it (especially as it's flaky...)






[jira] [Updated] (HDDS-4263) ReplicationManager shouldn't retain one healthy replica per origin node Id.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4263:
-
Labels: pull-request-available  (was: )

> ReplicationManager shouldn't retain one healthy replica per origin node Id.
> ---
>
> Key: HDDS-4263
> URL: https://issues.apache.org/jira/browse/HDDS-4263
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
>
> ReplicationManager currently retains one healthy replica per origin node id, 
> so if there are 5 replicas and each of them originated on a different node, 
> ReplicationManager won't reduce the replica count for this container.






[jira] [Updated] (HDDS-4222) [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4222:
-
Labels: pull-request-available  (was: )

> [OzoneFS optimization] Provide a mechanism for efficient path lookup
> 
>
> Key: HDDS-4222
> URL: https://issues.apache.org/jira/browse/HDDS-4222
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone FS Optimizations - Efficient Lookup using cache.pdf
>
>
> With the new file-system-like semantics design (HDDS-2939), traversing the 
> path components top-down requires multiple DB lookups. This task is to 
> discuss use cases and proposals for reducing the performance penalty of 
> path lookups.
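One possible mechanism can be sketched as a cache of resolved path components (a hypothetical illustration under assumed semantics, not the attached design; plain maps stand in for the DB):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class PathLookupCache {
    private final Map<String, Long> cache = new HashMap<>();
    final AtomicInteger dbLookups = new AtomicInteger();

    // Resolve "/a/b/c" by walking components top-down; each component's
    // object id is cached so repeated lookups under the same parent avoid
    // the simulated DB round trip.
    long resolve(String path) {
        long resolvedId = 0;               // root
        StringBuilder prefix = new StringBuilder();
        for (String component : path.split("/")) {
            if (component.isEmpty()) continue;
            prefix.append('/').append(component);
            resolvedId = cache.computeIfAbsent(prefix.toString(), p -> {
                dbLookups.incrementAndGet();   // simulated DB lookup
                return (long) p.hashCode();
            });
        }
        return resolvedId;
    }

    public static void main(String[] args) {
        PathLookupCache fs = new PathLookupCache();
        fs.resolve("/vol/bucket/dir/file1");   // 4 lookups
        fs.resolve("/vol/bucket/dir/file2");   // first 3 components cached
        System.out.println(fs.dbLookups.get()); // 5
    }
}
```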






[jira] [Updated] (HDDS-4262) Use ClientID and CallID from Rpc Client to detect retry requests

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4262:
-
Labels: pull-request-available  (was: )

> Use ClientID and CallID from Rpc Client to detect retry requests
> 
>
> Key: HDDS-4262
> URL: https://issues.apache.org/jira/browse/HDDS-4262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
> Environment: U
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Use clientID and callID to uniquely identify requests.
> This helps when a write request is retried: if the previous attempt has 
> already been processed, its result can be returned from the cache.
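The mechanism can be sketched as a response cache keyed by (clientId, callId). This is a hedged illustration with hypothetical names and a plain map; the real implementation lives in the OM request path:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class RetryCache {
    private final Map<String, String> responses = new HashMap<>();
    int executions = 0;

    // If the (clientId, callId) pair was already processed, return the cached
    // response instead of re-executing the write. Hypothetical sketch.
    String submit(String clientId, long callId, Supplier<String> writeOp) {
        String key = clientId + ":" + callId;
        return responses.computeIfAbsent(key, k -> {
            executions++;
            return writeOp.get();
        });
    }

    public static void main(String[] args) {
        RetryCache cache = new RetryCache();
        String first = cache.submit("client-1", 42, () -> "key created");
        String retry = cache.submit("client-1", 42, () -> "key created"); // retried RPC
        System.out.println(first.equals(retry)); // true
        System.out.println(cache.executions);    // 1
    }
}
```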






[jira] [Updated] (HDDS-3947) Sort DNs for client when the key is a file for #getFileStatus #listStatus APIs

2020-09-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3947:
-
Labels: pull-request-available  (was: )

> Sort DNs for client when the key is a file for #getFileStatus #listStatus APIs
> --
>
> Key: HDDS-3947
> URL: https://issues.apache.org/jira/browse/HDDS-3947
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Similar to {{OzoneManagerFS#lookupFile(OmKeyArgs args, String 
> clientAddress)}}, it would be good to sort the DNs for the given client 
> address when the key is a file.
> Thanks [~xyao] for your comments in 
> [HDDS-3824-PR|https://github.com/apache/hadoop-ozone/pull/1164].






[jira] [Updated] (HDDS-4122) Implement OM Delete Expired Open Key Request and Response

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4122:
-
Labels: pull-request-available  (was: )

> Implement OM Delete Expired Open Key Request and Response
> -
>
> Key: HDDS-4122
> URL: https://issues.apache.org/jira/browse/HDDS-4122
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
>
> Create an OM request and response that allows moving open keys from the open 
> key table to the deleted table in OM HA. The request portion of this 
> operation, which updates the open key table cache, will use a bucket lock.
>  
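The move described above can be sketched with plain maps standing in for the OM's open-key and deleted tables (all names and the time-based expiry check are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ExpiredOpenKeyMover {
    // Move open keys older than the expiry threshold from the open-key table
    // to the deleted table. Map values stand in for key creation times.
    static int moveExpired(Map<String, Long> openKeys,
                           Map<String, Long> deletedKeys,
                           long nowMillis, long expiryMillis) {
        int moved = 0;
        Iterator<Map.Entry<String, Long>> it = openKeys.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMillis - e.getValue() > expiryMillis) {
                deletedKeys.put(e.getKey(), e.getValue());
                it.remove();
                moved++;
            }
        }
        return moved;
    }

    public static void main(String[] args) {
        Map<String, Long> open = new HashMap<>();
        Map<String, Long> deleted = new HashMap<>();
        open.put("/vol/bucket/stale", 0L);        // created long ago
        open.put("/vol/bucket/fresh", 9_000L);    // created recently
        int moved = moveExpired(open, deleted, 10_000L, 5_000L);
        System.out.println(moved);                                    // 1
        System.out.println(deleted.containsKey("/vol/bucket/stale")); // true
    }
}
```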






[jira] [Updated] (HDDS-3727) Volume space: check quotaUsageInBytes when write key

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3727:
-
Labels: pull-request-available  (was: )

> Volume space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3727
> URL: https://issues.apache.org/jira/browse/HDDS-3727
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4104) Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4104:
-
Labels: pull-request-available  (was: )

> Provide a way to get the default value and key of java-based-configuration 
> easily
> -
>
> Key: HDDS-4104
> URL: https://issues.apache.org/jira/browse/HDDS-4104
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> - getDefaultValue
> - getKeyName
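One plausible shape for such accessors is reading the key and default from the configuration annotation via reflection. The `@Conf` annotation and `ScmConfig` holder below are illustrative stand-ins, not Ozone's actual `@Config` machinery:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/** Illustrative stand-in for an Ozone-style config annotation. */
@Retention(RetentionPolicy.RUNTIME)
@interface Conf {
  String key();
  String defaultValue();
}

public class ConfigIntrospector {
  /** Hypothetical java-based configuration holder. */
  static class ScmConfig {
    @Conf(key = "ozone.scm.names", defaultValue = "localhost")
    String scmNames;
  }

  /** getKeyName: read the configuration key from the annotation. */
  static String getKeyName(Class<?> cls, String field) {
    try {
      return cls.getDeclaredField(field).getAnnotation(Conf.class).key();
    } catch (NoSuchFieldException e) {
      throw new IllegalArgumentException(field, e);
    }
  }

  /** getDefaultValue: read the default value from the annotation. */
  static String getDefaultValue(Class<?> cls, String field) {
    try {
      return cls.getDeclaredField(field)
          .getAnnotation(Conf.class).defaultValue();
    } catch (NoSuchFieldException e) {
      throw new IllegalArgumentException(field, e);
    }
  }

  public static void main(String[] args) {
    System.out.println(getKeyName(ScmConfig.class, "scmNames"));      // ozone.scm.names
    System.out.println(getDefaultValue(ScmConfig.class, "scmNames")); // localhost
  }
}
```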






[jira] [Updated] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4255:
-
Labels: pull-request-available  (was: )

> Remove unused Ant and Jdiff dependency versions
> ---
>
> Key: HDDS-4255
> URL: https://issues.apache.org/jira/browse/HDDS-4255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> The Ant and JDiff versions are not used in the Ozone project, but we still 
> carry version declarations for them (inherited from the Hadoop parent pom, 
> which was used as a base for the main pom.xml).
> As the (unused) Ant version has security issues, I would remove both 
> declarations to avoid any confusion.






[jira] [Updated] (HDDS-4252) Add the current layout versions to DN - SCM proto payload.

2020-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4252:
-
Labels: pull-request-available  (was: )

> Add the current layout versions to DN - SCM proto payload.
> --
>
> Key: HDDS-4252
> URL: https://issues.apache.org/jira/browse/HDDS-4252
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HDDS-4254) Bucket space: add usedBytes and update it when create and delete key

2020-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4254:
-
Labels: pull-request-available  (was: )

> Bucket space: add usedBytes and update it when create and delete key
> 
>
> Key: HDDS-4254
> URL: https://issues.apache.org/jira/browse/HDDS-4254
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>
> Currently the quota setting does not take effect; HDDS-541 tracks all the 
> work needed to complete quota support. This PR is a subtask of HDDS-541.
> First, we add usedBytes to the bucket and update it when keys are written. 
> Later, we will use it to decide whether a bucket can accept a new key. 
> (Volume already implements this, and this PR is based on HDDS-4053.)
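A toy model of the per-bucket accounting being added here, with a quota check of the kind the later subtask would perform on key create. The class and method names are illustrative, not the actual OM bucket metadata API:

```java
/** Toy model of per-bucket space accounting. */
public class BucketSpaceTracker {
  private final long quotaInBytes;   // -1 means unlimited
  private long usedBytes;

  BucketSpaceTracker(long quotaInBytes) {
    this.quotaInBytes = quotaInBytes;
  }

  /** Key create: fail if the write would exceed the bucket quota. */
  synchronized void allocate(long keySize) {
    if (quotaInBytes >= 0 && usedBytes + keySize > quotaInBytes) {
      throw new IllegalStateException("bucket quota exceeded");
    }
    usedBytes += keySize;
  }

  /** Key delete: release the space the key occupied. */
  synchronized void release(long keySize) {
    usedBytes = Math.max(0, usedBytes - keySize);
  }

  synchronized long getUsedBytes() {
    return usedBytes;
  }

  public static void main(String[] args) {
    BucketSpaceTracker bucket = new BucketSpaceTracker(100);
    bucket.allocate(60);   // create a 60-byte key
    bucket.release(20);    // delete a 20-byte key
    System.out.println(bucket.getUsedBytes()); // prints 40
  }
}
```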






[jira] [Updated] (HDDS-4227) Implement a "prepareForUpgrade" step that applies all committed transactions onto the OM state machine.

2020-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4227:
-
Labels: pull-request-available  (was: )

> Implement a "prepareForUpgrade" step that applies all committed transactions 
> onto the OM state machine.
> ---
>
> Key: HDDS-4227
> URL: https://issues.apache.org/jira/browse/HDDS-4227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> *Why is this needed?*
> Through HDDS-4143, we have a generic factory to handle multiple versions of 
> apply-transaction implementations based on layout version. Hence, this 
> factory can be used to handle versioned requests across layout versions 
> whenever both versions need to exist in the code (say, for HDDS-2939).
> However, the OM Ratis requests are still undergoing a lot of minor changes 
> (HDDS-4007, HDDS-3903), and in these cases it becomes hard to maintain two 
> versions of the code just to support clean upgrades.
> Hence, the plan is to build a pre-upgrade utility (client API) that makes 
> sure an OM instance has no un-applied transactions in its Raft log. Invoking 
> this client API ensures that the upgrade starts from a clean state. This is 
> only needed in an HA setup; in a non-HA setup it can either be skipped, or 
> when invoked it is a no-op (non-Ratis) or causes no harm (single-node Ratis).
> *How does it work?*
> Before updating the software bits, the goal is to bring every OM to the 
> latest state with respect to applied transactions, so that the same version 
> of the code executes the apply-transaction step on all 3 OMs. At a high 
> level, the flow is as follows:
> * Before the upgrade, *stop* the OMs.
> * Start the OMs with a special flag, --prepareUpgrade (similar to --init: a 
> special mode in which the ephemeral OM instance stops after doing some work).
> * When an OM is started with --prepareUpgrade, it does not start the RPC 
> server, so no new requests can get in.
> * In this state, every OM is given time to apply transactions up to the last 
> transaction.
> * At least 2 OMs must have committed the last client request transaction to 
> their logs, so those 2 OMs are expected to apply up to that index quickly.
> * On every OM, the Raft log is purged after this wait period (so that no 
> replay happens), and a Ratis snapshot is taken at the last transaction.
> * Even if a lagging OM is unable to reach the last applied transaction 
> index, its log is purged once the wait time expires.
> * When the OMs are then started with the newer version, they all run the new 
> code.
> * The lagging OM installs the new Ratis snapshot, since it has no logs left 
> to replay from.
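The wait-then-purge step above can be sketched as follows. Everything here is hypothetical scaffolding; the real OM would drive the Ratis server and state machine APIs rather than this toy interface:

```java
/** Toy model of the --prepareUpgrade wait-and-purge step. */
public class PrepareUpgrade {
  /** Minimal stand-in for the OM's Ratis-backed state machine. */
  interface StateMachine {
    long lastCommittedIndex();
    long lastAppliedIndex();
    void takeSnapshot();
    void purgeRaftLog();
  }

  /** Wait (up to a deadline) for apply to catch up with commit, then
   *  snapshot and purge so no log replay happens on the new version. */
  static boolean prepare(StateMachine om, long timeoutMillis, long pollMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    boolean caughtUp = false;
    while (System.currentTimeMillis() < deadline) {
      if (om.lastAppliedIndex() >= om.lastCommittedIndex()) {
        caughtUp = true;
        break;
      }
      try {
        Thread.sleep(pollMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
      }
    }
    om.takeSnapshot();   // snapshot at the last applied transaction
    om.purgeRaftLog();   // even a lagging OM is purged after the wait
    return caughtUp;     // a lagging OM later installs the snapshot
  }

  public static void main(String[] args) {
    StateMachine caughtUp = new StateMachine() {
      public long lastCommittedIndex() { return 42; }
      public long lastAppliedIndex() { return 42; }
      public void takeSnapshot() { System.out.println("snapshot@42"); }
      public void purgeRaftLog() { System.out.println("log purged"); }
    };
    System.out.println("caught up: " + prepare(caughtUp, 1000, 10));
  }
}
```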






[jira] [Updated] (HDDS-4250) Fix wrong logger name

2020-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4250:
-
Labels: pull-request-available  (was: )

> Fix wrong logger name
> -
>
> Key: HDDS-4250
> URL: https://issues.apache.org/jira/browse/HDDS-4250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Xie Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Fix the wrong logger name: the logger name does not match the enclosing 
> class name. For example:
> {code:java}
> public class OMBucketSetAclRequest extends OMBucketAclRequest {
>   private static final Logger LOG =
>   LoggerFactory.getLogger(OMBucketAddAclRequest.class);
> {code}
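The fix is presumably to reference the enclosing class (`OMBucketSetAclRequest.class`). One way to rule out this copy-paste mistake entirely is the `MethodHandles` idiom, which always resolves to the class containing the call; the demo below avoids an SLF4J dependency and just shows the resolution (class name is illustrative):

```java
import java.lang.invoke.MethodHandles;

public class LoggerNameDemo {
  /** Resolves to the enclosing class even when the line is copy-pasted
   *  into another class, unlike a hard-coded SomeOtherClass.class literal. */
  static final Class<?> LOG_CLASS = MethodHandles.lookup().lookupClass();

  public static void main(String[] args) {
    // With SLF4J this would be:
    //   LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
    System.out.println(LOG_CLASS.getSimpleName()); // prints LoggerNameDemo
  }
}
```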






[jira] [Updated] (HDDS-4192) enable SCM Raft Group based on config ozone.scm.names

2020-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4192:
-
Labels: pull-request-available  (was: )

> enable SCM Raft Group based on config ozone.scm.names
> -
>
> Key: HDDS-4192
> URL: https://issues.apache.org/jira/browse/HDDS-4192
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Major
>  Labels: pull-request-available
>
>  
> Say ozone.scm.names is "ip1,ip2,ip3": the SCM at ip1 identifies its 
> RaftPeerId as scm1, the SCM at ip2 as scm2, and the SCM at ip3 as scm3. 
> Together they automatically form a Raft group.
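The positional derivation of peer ids from the config value can be sketched as below. The real code would build Ratis `RaftPeer` objects with addresses and ports; this toy version (names illustrative) only shows the id assignment:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy derivation of SCM Raft peer ids from ozone.scm.names. */
public class ScmRaftGroupBuilder {
  /** Each host in the comma-separated list gets the RaftPeerId
   *  scm&lt;position&gt;, so every SCM derives the same group. */
  static Map<String, String> peersFromScmNames(String scmNames) {
    Map<String, String> peers = new LinkedHashMap<>();
    String[] hosts = scmNames.split(",");
    for (int i = 0; i < hosts.length; i++) {
      peers.put("scm" + (i + 1), hosts[i].trim()); // scm1 -> ip1, ...
    }
    return peers;
  }

  public static void main(String[] args) {
    System.out.println(peersFromScmNames("ip1,ip2,ip3"));
    // prints {scm1=ip1, scm2=ip2, scm3=ip3}
  }
}
```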





