[GitHub] [hadoop-ozone] dineshchitlangia commented on issue #77: HDDS-2354. SCM log is full of AllocateBlock logs.

2019-10-23 Thread GitBox
dineshchitlangia commented on issue #77: HDDS-2354. SCM log is full of 
AllocateBlock logs.
URL: https://github.com/apache/hadoop-ozone/pull/77#issuecomment-545746468
 
 
   @bharatviswa504 thanks for the contribution. Merged this to master.





[GitHub] [hadoop-ozone] dineshchitlangia merged pull request #77: HDDS-2354. SCM log is full of AllocateBlock logs.

2019-10-23 Thread GitBox
dineshchitlangia merged pull request #77: HDDS-2354. SCM log is full of 
AllocateBlock logs.
URL: https://github.com/apache/hadoop-ozone/pull/77
 
 
   





[jira] [Resolved] (HDDS-2297) Enable Opentracing for new Freon tests

2019-10-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2297.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Enable Opentracing for new Freon tests
> --
>
> Key: HDDS-2297
> URL: https://issues.apache.org/jira/browse/HDDS-2297
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-2022 introduced new freon tests, but the initial root span of 
> opentracing is not created before the test execution. We need to enable 
> opentracing to get a better view of the executions of the new freon tests.






[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #16: HDDS-2297. Enable Opentracing for new Freon tests

2019-10-23 Thread GitBox
bharatviswa504 commented on issue #16: HDDS-2297. Enable Opentracing for new 
Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/16#issuecomment-545743215
 
 
   Thank You @elek for the contribution and @adoroszlai for the review.
   





[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #16: HDDS-2297. Enable Opentracing for new Freon tests

2019-10-23 Thread GitBox
bharatviswa504 merged pull request #16: HDDS-2297. Enable Opentracing for new 
Freon tests
URL: https://github.com/apache/hadoop-ozone/pull/16
 
 
   





[jira] [Created] (HDDS-2355) Om double buffer flush termination with rocksdb error

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2355:


 Summary: Om double buffer flush termination with rocksdb error
 Key: HDDS-2355
 URL: https://issues.apache.org/jira/browse/HDDS-2355
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


om_1    | java.io.IOException: Unable to write the batch.
om_1    |     at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:48)
om_1    |     at org.apache.hadoop.hdds.utils.db.RDBStore.commitBatchOperation(RDBStore.java:240)
om_1    |     at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:146)
om_1    |     at java.base/java.lang.Thread.run(Thread.java:834)
om_1    | Caused by: org.rocksdb.RocksDBException: WritePrepared/WriteUnprepared txn tag when write_after_commit_ is enabled (in default WriteCommitted mode). If it is not due to corruption, the WAL must be emptied before changing the WritePolicy.
om_1    |     at org.rocksdb.RocksDB.write0(Native Method)
om_1    |     at org.rocksdb.RocksDB.write(RocksDB.java:1421)
om_1    |     at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:46)
 
In a few of my test runs I see this error and the OM is terminated.






[jira] [Resolved] (HDDS-2307) ContextFactory.java contains Windows '^M' at end of each line

2019-10-23 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen resolved HDDS-2307.
--
Resolution: Not A Problem

> ContextFactory.java contains Windows '^M' at end of each line
> -
>
> Key: HDDS-2307
> URL: https://issues.apache.org/jira/browse/HDDS-2307
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie
>
> Convert the file to Unix format.






[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on issue #77: HDDS-2354. SCM log is full of AllocateBlock logs.

2019-10-23 Thread GitBox
bharatviswa504 edited a comment on issue #77: HDDS-2354. SCM log is full of 
AllocateBlock logs.
URL: https://github.com/apache/hadoop-ozone/pull/77#issuecomment-545727920
 
 
   Yes, as this is a log-only change. But let's wait till the CI is completed.





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #77: HDDS-2354. SCM log is full of AllocateBlock logs.

2019-10-23 Thread GitBox
bharatviswa504 commented on issue #77: HDDS-2354. SCM log is full of 
AllocateBlock logs.
URL: https://github.com/apache/hadoop-ozone/pull/77#issuecomment-545727920
 
 
   Yes, as this is a log-only change.





[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #77: HDDS-2354. SCM log is full of AllocateBlock logs.

2019-10-23 Thread GitBox
bharatviswa504 opened a new pull request #77: HDDS-2354. SCM log is full of 
AllocateBlock logs.
URL: https://github.com/apache/hadoop-ozone/pull/77
 
 
   ## What changes were proposed in this pull request?
   
   Make the below log statement a debug log:
   2019-10-24 03:17:43,087 INFO server.SCMBlockProtocolServer: Allocating 1 
blocks of size 268435456, with ExcludeList {datanodes = [], containerIds = [], 
pipelineIds = []}
   
   scm_1   | 2019-10-24 03:17:43,088 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList {datanodes = [], 
containerIds = [], pipelineIds = []}
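   
   A minimal sketch of the kind of change this implies (assuming the existing 
slf4j-style `LOG` in `SCMBlockProtocolServer`; the variable names below are 
placeholders for illustration, not the merged code):
   
   ```
   // Before: every allocateBlock call logs at INFO, flooding the SCM log.
   LOG.info("Allocating {} blocks of size {}, with {}",
       numBlocks, size, excludeList);

   // After: demote to DEBUG so the message only appears when enabled.
   if (LOG.isDebugEnabled()) {
     LOG.debug("Allocating {} blocks of size {}, with {}",
         numBlocks, size, excludeList);
   }
   ```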
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2354
   
   ## How was this patch tested?
   
   Ran docker-compose cluster to verify.
   





[jira] [Created] (HDDS-2354) SCM log is full of AllocateBlock logs

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2354:


 Summary: SCM log is full of AllocateBlock logs
 Key: HDDS-2354
 URL: https://issues.apache.org/jira/browse/HDDS-2354
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


2019-10-24 03:17:43,087 INFO server.SCMBlockProtocolServer: Allocating 1 blocks 
of size 268435456, with ExcludeList {datanodes = [], containerIds = [], 
pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,088 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList {datanodes = [], 
containerIds = [], pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,089 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList {datanodes = [], 
containerIds = [], pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,093 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList {datanodes = [], 
containerIds = [], pipelineIds = []}

 






[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r338361688
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
 ##
 @@ -76,11 +80,46 @@ public PipelinePlacementPolicy(
    * Returns true if this node meets the criteria.
    *
    * @param datanodeDetails DatanodeDetails
+   * @param nodesRequired nodes required count
    * @return true if we have enough space.
    */
   @VisibleForTesting
-  boolean meetCriteria(DatanodeDetails datanodeDetails, long heavyNodeLimit) {
-    return (nodeManager.getPipelinesCount(datanodeDetails) <= heavyNodeLimit);
+  boolean meetCriteria(DatanodeDetails datanodeDetails, int nodesRequired) {
+    if (heavyNodeCriteria == 0) {
+      // no limit applied.
+      return true;
+    }
+    // Datanodes from pipeline in some states can also be considered available
+    // for pipeline allocation. Thus the number of these pipeline shall be
+    // deducted from total heaviness calculation.
+    int pipelineNumDeductable = 0;
+    Set<PipelineID> pipelines = nodeManager.getPipelines(datanodeDetails);
+    for (PipelineID pid : pipelines) {
+      Pipeline pipeline;
+      try {
+        pipeline = stateManager.getPipeline(pid);
+      } catch (PipelineNotFoundException e) {
+        LOG.error("Pipeline not found in pipeline state manager during" +
+            " pipeline creation. PipelineID: " + pid +
+            " exception: " + e.getMessage());
+        continue;
 
 Review comment:
   Not from a test; it's an exception thrown from the stateManager.getPipeline 
method. So I figured I'd add an error message for it, because pipeline 
creation won't stop here.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r338361334
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,45 @@ private void initializePipelineState() throws IOException {
     }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+    if (factor != ReplicationFactor.THREE) {
+      // Only put limits for Factor THREE pipelines.
+      return false;
+    }
+    // Per datanode limit
+    if (heavyNodeCriteria > 0) {
+      return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
 
 Review comment:
   Other pipelines, like STALE ones, could turn OPEN at any time, so here I 
only deduct CLOSED pipelines.
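   
   To make the arithmetic of the per-datanode check concrete, a tiny 
self-contained example with hypothetical numbers (not values from any real 
cluster):
   
   ```
   public class PipelineLimitExample {
     public static void main(String[] args) {
       // Hypothetical values: each datanode may join 2 factor-THREE
       // pipelines, and the cluster has 3 HEALTHY datanodes.
       int heavyNodeCriteria = 2;
       int healthyNodes = 3;
       int factor = 3;
       // Open (non-CLOSED) RATIS/THREE pipelines in the state manager.
       int nonClosedPipelines = 3;

       // Mirrors the per-datanode check in the diff: limit = 2 * 3 / 3 = 2.
       boolean exceeded =
           nonClosedPipelines > heavyNodeCriteria * healthyNodes / factor;
       System.out.println("limit exceeded: " + exceeded);  // true: 3 > 2
     }
   }
   ```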





[GitHub] [hadoop-ozone] chimney-lee edited a comment on issue #74: HDDS-2348.Remove log4j properties for package org.apache.hadoop.ozone

2019-10-23 Thread GitBox
chimney-lee edited a comment on issue #74: HDDS-2348.Remove log4j properties 
for package org.apache.hadoop.ozone
URL: https://github.com/apache/hadoop-ozone/pull/74#issuecomment-545710700
 
 
   Thanks for the reply @elek. In Hadoop, the namenode/datanode logs go to 
files hadoop-${user}-{namenode|datanode}-${host}.log, client logs go to the 
console, and the audit log goes to hdfs-audit.log. Why does the log for 
package org.apache.hadoop.ozone need to be written to one file?
   What I ran into: when I start OM, the logs produced by OM cannot all be 
written to the file hadoop-${user}-om-${host}.log. Because of the config 
`log4j.logger.org.apache.hadoop.ozone=DEBUG,OZONE,FILE` in log4j, the class 
`org.apache.hadoop.ozone.om.OzoneManagerStarter` logs its messages to the 
files hadoop-${user}-om-${host}.out and ozone.log but not to 
hadoop-${user}-om-${host}.log, which I think is unreasonable.





[GitHub] [hadoop-ozone] chimney-lee commented on issue #74: HDDS-2348.Remove log4j properties for package org.apache.hadoop.ozone

2019-10-23 Thread GitBox
chimney-lee commented on issue #74: HDDS-2348.Remove log4j properties for 
package org.apache.hadoop.ozone
URL: https://github.com/apache/hadoop-ozone/pull/74#issuecomment-545710700
 
 
   Thanks for the reply @elek. In Hadoop, the namenode/datanode logs go to 
files hadoop-${user}-{namenode|datanode}-${host}.log, client logs go to the 
console, and the audit log goes to hdfs-audit.log. Why does the log for 
package org.apache.hadoop.ozone need to be written to one file?
   What I ran into: when I start OM, the logs produced by OM cannot all be 
written to the file hadoop-${user}-om-${host}.log. Because of the config 
`log4j.logger.org.apache.hadoop.ozone=DEBUG,OZONE,FILE` in log4j, the class 
`org.apache.hadoop.ozone.om.OzoneManagerStarter` logs its messages to the 
files hadoop-${user}-om-${host}.out and ozone.log but not to 
hadoop-${user}-om-${host}.log, which I think is unreasonable.





Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-10-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/484/

No changes


Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread Konstantin Shvachko
+1 on RC1

- Verified signatures
- Verified maven artifacts on Nexus for sources
- Checked rat reports
- Checked documentation
- Checked packaging contents
- Built from sources on RHEL 7 box
- Ran unit tests for new HDFS features with Java 8

Thanks,
--Konstantin

On Tue, Oct 22, 2019 at 2:55 PM Jonathan Hung  wrote:

> Hi folks,
>
> This is the second release candidate for the first release of Apache Hadoop
> 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> features such as:
>
> - User-defined resource types
> - Native GPU support as a schedulable resource type
> - Consistent reads from standby node
> - Namenode port based selective encryption
> - Improvements related to rolling upgrade support from 2.x to 3.x
> - Cost based fair call queue
>
> The RC1 artifacts are at: http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
>
> RC tag is release-2.10.0-RC1.
>
> The maven artifacts are hosted here:
> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
>
> My public key is available here:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm PDT.
>
> Thanks,
> Jonathan Hung
>
> [1]
>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)
>


[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338340651
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -171,7 +171,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       omResponse.setSetVolumePropertyResponse(
           SetVolumePropertyResponse.newBuilder().build());
       omClientResponse = new OMVolumeSetOwnerResponse(oldOwner,
-          oldOwnerVolumeList, newOwnerVolumeList, omVolumeArgs,
+          oldOwnerVolumeList, newOwnerVolumeList,
+          (OmVolumeArgs) omVolumeArgs.clone(),
 
 Review comment:
   https://issues.apache.org/jira/browse/HDDS-2322
   The issue is seen with Key operations; I think we might see the same issue 
with other operations too. Take a case where the OMVolumeArgs resulting from 
RemoveAcl is submitted to the flush thread while another thread performing 
removeAcl is updating the same OmVolumeArgs (we have a list and a map 
internally for acls). Then we might see a ConcurrentModificationException. 
So, to cover these kinds of scenarios, I updated all the places where we 
submit the response to the doubleBuffer flush threads.
   
   One more thing: if we use the same object, then when a flush happens we 
might write the entries to rocksdb while some other operation is changing the 
same entry, whether or not it has been submitted to the doublebuffer flush 
thread. To keep this clean, all entries submitted to the doubleBuffer flush 
are made immutable.
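   
   For illustration, a self-contained toy example (hypothetical names, not OM 
code) of the kind of race described above, where an iterating flush thread 
and a mutating request thread share one list:
   
   ```
   import java.util.ArrayList;
   import java.util.List;

   public class DoubleBufferRaceDemo {
     public static void main(String[] args) throws InterruptedException {
       List<String> acls = new ArrayList<>();
       for (int i = 0; i < 100_000; i++) {
         acls.add("acl-" + i);
       }

       // Flush thread: iterates the shared acl list (as when serializing).
       Thread flusher = new Thread(() -> {
         try {
           for (String acl : acls) {
             acl.length();
           }
         } catch (java.util.ConcurrentModificationException e) {
           System.out.println("flusher hit: " + e);
         }
       });
       // Request thread: mutates the same list (as a concurrent removeAcl
       // or setAcl on the shared args object would).
       Thread mutator = new Thread(() -> acls.add("new-acl"));

       flusher.start();
       mutator.start();
       flusher.join();
       mutator.join();

       // Handing the flusher a defensive copy instead avoids the race.
       List<String> snapshot = new ArrayList<>(acls);
       System.out.println("snapshot size: " + snapshot.size());
     }
   }
   ```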





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338340651
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -171,7 +171,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       omResponse.setSetVolumePropertyResponse(
           SetVolumePropertyResponse.newBuilder().build());
       omClientResponse = new OMVolumeSetOwnerResponse(oldOwner,
-          oldOwnerVolumeList, newOwnerVolumeList, omVolumeArgs,
+          oldOwnerVolumeList, newOwnerVolumeList,
+          (OmVolumeArgs) omVolumeArgs.clone(),
 
 Review comment:
   https://issues.apache.org/jira/browse/HDDS-2322
   The issue is seen with Key operations; I think we might see the same issue 
with other operations too. Take a case where the OMVolumeArgs resulting from 
RemoveAcl is submitted to the flush thread while another thread performing 
removeAcl is updating the same OmVolumeArgs (we have a list and a map 
internally for acls). Then we might see a ConcurrentModificationException. 
So, to cover these kinds of scenarios, I updated all the places where we 
submit the response to the doubleBuffer flush threads.
   
   One more thing: if we use the same object, then when a flush happens we 
might flush these entries to rocksdb while some other operation is changing 
the same entry, which might or might not yet have been submitted to the 
doublebuffer flush thread. To keep this clean, all entries submitted to the 
doubleBuffer flush are made immutable.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338340651
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -171,7 +171,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       omResponse.setSetVolumePropertyResponse(
           SetVolumePropertyResponse.newBuilder().build());
       omClientResponse = new OMVolumeSetOwnerResponse(oldOwner,
-          oldOwnerVolumeList, newOwnerVolumeList, omVolumeArgs,
+          oldOwnerVolumeList, newOwnerVolumeList,
+          (OmVolumeArgs) omVolumeArgs.clone(),
 
 Review comment:
   https://issues.apache.org/jira/browse/HDDS-2322
   The issue is seen with Key operations; I think we might see the same issue 
with other operations too. Take a case where RemoveAcl submitted OMVolumeArgs 
to the flush thread while another thread performing removeAcl is updating the 
same OmVolumeArgs. Then we might see a ConcurrentModificationException. So, 
to cover these kinds of scenarios, I updated all the places where we submit 
the response to the doubleBuffer flush threads.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338340651
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -171,7 +171,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       omResponse.setSetVolumePropertyResponse(
           SetVolumePropertyResponse.newBuilder().build());
       omClientResponse = new OMVolumeSetOwnerResponse(oldOwner,
-          oldOwnerVolumeList, newOwnerVolumeList, omVolumeArgs,
+          oldOwnerVolumeList, newOwnerVolumeList,
+          (OmVolumeArgs) omVolumeArgs.clone(),
 
 Review comment:
   https://issues.apache.org/jira/browse/HDDS-2322
   The issue is seen with Key operations; I think we might see the same issue 
with other ops too. Take a case where RemoveAcl submitted OMVolumeArgs to the 
flush thread while another thread performing removeAcl is updating the same 
OmVolumeArgs. Then we might see a ConcurrentModificationException. So, to 
cover these kinds of scenarios, I updated all the places where we submit the 
response to the doubleBuffer flush threads.





[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #15: HDDS-2296. ozoneperf compose cluster shouldn't start freon by default

2019-10-23 Thread GitBox
bharatviswa504 merged pull request #15: HDDS-2296. ozoneperf compose cluster 
shouldn't start freon by default
URL: https://github.com/apache/hadoop-ozone/pull/15
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #15: HDDS-2296. ozoneperf compose cluster shouldn't start freon by default

2019-10-23 Thread GitBox
bharatviswa504 commented on issue #15: HDDS-2296. ozoneperf compose cluster 
shouldn't start freon by default
URL: https://github.com/apache/hadoop-ozone/pull/15#issuecomment-545693509
 
 
   Thank You @elek for the contribution and @adoroszlai for the review.





[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #76: HDDS-2352. Client gets internal error instead of volume not found in secure cluster

2019-10-23 Thread GitBox
bharatviswa504 merged pull request #76: HDDS-2352. Client gets internal error 
instead of volume not found in secure cluster
URL: https://github.com/apache/hadoop-ozone/pull/76
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #76: HDDS-2352. Client gets internal error instead of volume not found in secure cluster

2019-10-23 Thread GitBox
bharatviswa504 commented on issue #76: HDDS-2352. Client gets internal error 
instead of volume not found in secure cluster
URL: https://github.com/apache/hadoop-ozone/pull/76#issuecomment-545688874
 
 
   Thank You @adoroszlai for the fix and @dineshchitlangia for the review.





[jira] [Reopened] (HDFS-14927) RBF: Add metrics for active RPC client threads

2019-10-23 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon Gao reopened HDFS-14927:
-

Reopen to gather more info

> RBF: Add metrics for active RPC client threads
> --
>
> Key: HDFS-14927
> URL: https://issues.apache.org/jira/browse/HDFS-14927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
>
> It is good to add some monitoring on the active RPC client threads, so we 
> know the utilization and when to bump up 
> `dfs.federation.router.client.thread-size`






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r338324997
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
 ##
 @@ -73,8 +75,11 @@
   public static void init() throws Exception {
     conf = new OzoneConfiguration();
     conf.set(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT, "1");
+    conf.setInt(OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT, 2);
 
 Review comment:
   I'm fine with the current approach. I'm asking because I see a few other 
mini ozone cluster based tests require similar settings; adding a wrapper 
could make this easier. But we can do that as a follow-up jira.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r338324646
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,45 @@ private void initializePipelineState() throws IOException {
     }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+    if (factor != ReplicationFactor.THREE) {
+      // Only put limits for Factor THREE pipelines.
+      return false;
+    }
+    // Per datanode limit
+    if (heavyNodeCriteria > 0) {
+      return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+          stateManager.getPipelines(ReplicationType.RATIS, factor,
+          Pipeline.PipelineState.CLOSED).size()) > heavyNodeCriteria *
+          nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY) /
+          factor.getNumber();
+    }
+
+    // Global limit
+    if (pipelineNumberLimit > 0) {
+      return (stateManager.getPipelines(ReplicationType.RATIS,
+          ReplicationFactor.THREE).size() - stateManager.getPipelines(
+          ReplicationType.RATIS, ReplicationFactor.THREE,
+          Pipeline.PipelineState.CLOSED).size()) >
+          (pipelineNumberLimit - stateManager.getPipelines(
 
 Review comment:
   Makes sense to me.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r338324469
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2PipelineMap.java
 ##
 @@ -71,6 +71,10 @@ public synchronized void addPipeline(Pipeline pipeline) {
       UUID dnId = details.getUuid();
       dn2ObjectMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
           .add(pipeline.getId());
+      dn2ObjectMap.computeIfPresent(dnId, (k, v) -> {
+        v.add(pipeline.getId());
 
 Review comment:
   Correct me if I'm wrong: dn2ObjectMap.computeIfAbsent(dnId, k -> 
ConcurrentHashMap.newKeySet()) ensures that for a dnId we will have a KeySet.
   
   Then the returned value (the KeySet) allows line 73 to continue the 
operation and add the pipeline id with .add(pipeline.getId()).
   
   The JDK documentation can be found here: 
https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#computeIfAbsent-K-java.util.function.Function-
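   
   A small runnable illustration of that point (toy types; pipeline ids are 
plain strings here, not real PipelineID objects):
   
   ```
   import java.util.Map;
   import java.util.Set;
   import java.util.UUID;
   import java.util.concurrent.ConcurrentHashMap;

   public class ComputeIfAbsentDemo {
     public static void main(String[] args) {
       Map<UUID, Set<String>> dn2ObjectMap = new ConcurrentHashMap<>();
       UUID dnId = UUID.randomUUID();

       // computeIfAbsent creates the key set on first use and returns the
       // existing set on later calls, so chaining .add() covers both cases;
       // a follow-up computeIfPresent add is redundant.
       dn2ObjectMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
           .add("pipeline-1");
       dn2ObjectMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
           .add("pipeline-2");

       System.out.println(dn2ObjectMap.get(dnId));  // both pipeline ids present
     }
   }
   ```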





[jira] [Created] (HDDS-2353) Cleanup old write-path code in OM

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2353:


 Summary: Cleanup old write-path code in OM
 Key: HDDS-2353
 URL: https://issues.apache.org/jira/browse/HDDS-2353
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


This Jira is to clean up the old write-path code in OM. As the newly added 
request/response code is also used for non-HA, we can clean up the old code. 
This integrated code has also been tested for a few days now, so this is a 
good time for the cleanup. Keeping the old code around also causes trouble for 
patches fixing the write path, since they need to update two places (if we 
change a constructor, it requires a change in two places).






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
xiaoyuyao commented on a change in pull request #71: HDDS-2344. Add immutable 
entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338320989
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetOwnerRequest.java
 ##
 @@ -171,7 +171,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       omResponse.setSetVolumePropertyResponse(
           SetVolumePropertyResponse.newBuilder().build());
       omClientResponse = new OMVolumeSetOwnerResponse(oldOwner,
-          oldOwnerVolumeList, newOwnerVolumeList, omVolumeArgs,
+          oldOwnerVolumeList, newOwnerVolumeList,
+          (OmVolumeArgs) omVolumeArgs.clone(),
 
 Review comment:
   It is not clear to me why an extra clone of omVolumeArgs is needed here. 
omVolumeArgs was just created a few lines above, at line 129. Since it is not 
an argument provided by the caller and is not handed over to other clients, 
can you post a stack trace showing the concurrent modification exception?
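   
   For reference, a minimal sketch of the defensive-copy idea under discussion 
(a toy class, not the real OmVolumeArgs):
   
   ```
   import java.util.ArrayList;
   import java.util.List;

   // Toy stand-in for an args object with mutable internal state.
   public final class VolumeArgsDemo implements Cloneable {
     private final String volume;
     private final List<String> acls;

     VolumeArgsDemo(String volume, List<String> acls) {
       this.volume = volume;
       this.acls = acls;
     }

     @Override
     public VolumeArgsDemo clone() {
       // Deep-copy the mutable acl list so the copy can be handed to another
       // thread (e.g. the double-buffer flush) without sharing state.
       return new VolumeArgsDemo(volume, new ArrayList<>(acls));
     }

     public static void main(String[] args) {
       List<String> acls = new ArrayList<>(List.of("user:a:rw"));
       VolumeArgsDemo original = new VolumeArgsDemo("vol1", acls);
       VolumeArgsDemo snapshot = original.clone();
       acls.add("user:b:rw");  // a later mutation...
       // ...does not affect the snapshot handed to the flush thread.
     }
   }
   ```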





HDFS sync

2019-10-23 Thread Wei-Chiu Chuang
Hi folks,

I don't want to add meetings unnecessarily, but I would like the on-call
engineer of the week to join and summarize the support cases (CDH
engineering escalations, HDP EARs). Ideally anyone with an open case should
join too, but let's start with this and see how it goes. Yes, I am using
this trick to force you to make more progress on support cases :)

Additionally, @Siyao Meng will spend a few minutes in
this week's HDFS sync to talk about/demo HDFS Dynamometer. So hopefully
you'll find it useful in the future.


Reminder: APAC Hadoop storage community sync

2019-10-23 Thread Wei-Chiu Chuang
PDT 10pm Wednesday = tonight, CST 1pm Thursday = today.
Feel free to join Zoom and chat.

Join Zoom Meeting

https://cloudera.zoom.us/j/880548968

Past sessions:
https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit

Also heads-up,
On November 20/21, Feilong from Intel will present his work on HDFS-13762
(Support non-volatile storage class memory(SCM) in HDFS cache directives).
This is going to happen in the APAC storage community sync.

Best,
Weichiu


[jira] [Resolved] (HDFS-14927) RBF: Add metrics for active RPC client threads

2019-10-23 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon Gao resolved HDFS-14927.
-
Resolution: Invalid

> RBF: Add metrics for active RPC client threads
> --
>
> Key: HDFS-14927
> URL: https://issues.apache.org/jira/browse/HDFS-14927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
>
> It is good to add some monitoring on the active RPC client threads, so we 
> know the utilization and when to bump up 
> `dfs.federation.router.client.thread-size`






[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #51: HDDS-2311. Fix logic of RetryPolicy in OzoneClientSideTranslatorPB.

2019-10-23 Thread GitBox
hanishakoneru commented on a change in pull request #51: HDDS-2311. Fix logic 
of RetryPolicy in OzoneClientSideTranslatorPB.
URL: https://github.com/apache/hadoop-ozone/pull/51#discussion_r338301299
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -414,7 +414,7 @@
   public static final String OZONE_CLIENT_RETRY_MAX_ATTEMPTS_KEY =
       "ozone.client.retry.max.attempts";
   public static final int OZONE_CLIENT_RETRY_MAX_ATTEMPTS_DEFAULT =
-      10;
+      30;
 
 Review comment:
   I just thought we should retry more times on each OM (10 on each OM for a 
3-node OM cluster).





Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Wangda Tan
We're going to fix the Submarine email list issues once the spin-off work
starts.

On Wed, Oct 23, 2019 at 2:39 PM Matt Foley 
wrote:

> Definitely yes on ‘ozone-issues’.  Whether we want to keep ozone-dev and
> hdfs-dev together or separate, I’m neutral.
> Thanks,
> —Matt
>
> On Oct 23, 2019, at 2:11 PM, Elek, Marton  wrote:
>
> Thanks for reporting this problem, Rohith.
>
> Yes, it seems to be configured with the wrong mailing list.
>
> I think the right fix is to create ozone-dev@ and ozone-issues@ and use
> them instead of hdfs-(dev/issues).
>
> Are there any objections to creating new ozone-* mailing lists?
>
> Thanks,
> Marton
>
>
> On 10/21/19 6:03 AM, Rohith Sharma K S wrote:
> > + common/yarn and mapreduce/submarine
> > Looks like the same issue exists in the submarine repository also!
> > On Mon, 21 Oct 2019 at 09:30, Rohith Sharma K S <
> rohithsharm...@apache.org>
> > wrote:
> >> Folks,
> >>
> >> In the Hadoop world, each mailing list has its own purpose:
> >> 1. The hdfs/common/yarn/mapreduce-*dev* mailing lists are meant for
> >> developer discussion.
> >> 2. The hdfs/common/yarn/mapreduce-*issues* mailing lists are used for
> >> comments made on the issues.
> >>
> >> It appears the Hadoop-Ozone repository configured the *hdfs-dev* mailing
> >> list for the *hdfs-issues* list as well. As a result the hdfs-dev mailing
> >> list is bombarded with every comment made in the hadoop-ozone repository.
> >>
> >>
> >> Could it be fixed?
> >>
> >> -Rohith Sharma K S
> >>
> >>
> >>
>
>


[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #51: HDDS-2311. Fix logic of RetryPolicy in OzoneClientSideTranslatorPB.

2019-10-23 Thread GitBox
hanishakoneru commented on a change in pull request #51: HDDS-2311. Fix logic 
of RetryPolicy in OzoneClientSideTranslatorPB.
URL: https://github.com/apache/hadoop-ozone/pull/51#discussion_r338300716
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
 ##
 @@ -277,6 +269,22 @@ private RetryAction getRetryAction(RetryAction fallbackAction,
     return proxy;
   }
 
+  /**
+   * Check if exception is a NotLeaderException.
+   * @return NotLeaderException.
+   */
+  private NotLeaderException getNotLeaderException(Exception exception) {
+    Throwable cause = exception.getCause();
+    if (cause instanceof RemoteException) {
 
 Review comment:
   Updated to check that cause is not null.
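   
   For context, a hedged sketch of what the null-safe unwrapping might look 
like (the actual merged method may differ; `unwrapRemoteException` is the 
standard helper on Hadoop's `org.apache.hadoop.ipc.RemoteException`):
   
   ```
   private NotLeaderException getNotLeaderException(Exception exception) {
     Throwable cause = exception.getCause();
     // Guard against a null cause before inspecting it.
     if (cause != null && cause instanceof RemoteException) {
       IOException ioException = ((RemoteException) cause)
           .unwrapRemoteException(NotLeaderException.class);
       if (ioException instanceof NotLeaderException) {
         return (NotLeaderException) ioException;
       }
     }
     return null;
   }
   ```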





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #9: HDDS-2240. Command line tool for OM Admin

2019-10-23 Thread GitBox
hanishakoneru commented on issue #9: HDDS-2240. Command line tool for OM Admin
URL: https://github.com/apache/hadoop-ozone/pull/9#issuecomment-545653313
 
 
   @anuengineer, I fixed the compile failure. Please take a look when you get a 
chance. Thanks.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #9: HDDS-2240. Command line tool for OM Admin

2019-10-23 Thread GitBox
hanishakoneru commented on a change in pull request #9: HDDS-2240. Command line 
tool for OM Admin
URL: https://github.com/apache/hadoop-ozone/pull/9#discussion_r338297745
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -223,6 +226,20 @@ public RpcClient(Configuration conf, String omServiceId) 
throws IOException {
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_DEFAULT);
   }
 
+  @Override
+  public List getOmRoleInfos() throws IOException {
 
 Review comment:
   Thank you @bharatviswa504. Fixed it by calling the getServiceList() API 
when getOmRoles is called.
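   
   A hedged sketch of that approach (the type and accessor names here are 
assumptions for illustration, not the merged code):
   
   ```
   @Override
   public List<ServiceInfo> getOmRoleInfos() throws IOException {
     // Reuse the existing getServiceList() RPC and keep only the OM entries,
     // instead of introducing a separate RPC for role info.
     List<ServiceInfo> services = ozoneManagerClient.getServiceList();
     List<ServiceInfo> omRoles = new ArrayList<>();
     for (ServiceInfo info : services) {
       if (info.getNodeType() == HddsProtos.NodeType.OM) {
         omRoles.add(info);
       }
     }
     return omRoles;
   }
   ```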





Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Matt Foley
Definitely yes on ‘ozone-issues’.  Whether we want to keep ozone-dev and 
hdfs-dev together or separate, I’m neutral.
Thanks,
—Matt

On Oct 23, 2019, at 2:11 PM, Elek, Marton  wrote:

Thanks for reporting this problem, Rohith.

Yes, it seems to be configured with the wrong mailing list.

I think the right fix is to create ozone-dev@ and ozone-issues@ and use them 
instead of hdfs-(dev/issues).

Are there any objections to creating new ozone-* mailing lists?

Thanks,
Marton


On 10/21/19 6:03 AM, Rohith Sharma K S wrote:
> + common/yarn and mapreduce/submarine
> Looks like the same issue exists in the submarine repository also!
> On Mon, 21 Oct 2019 at 09:30, Rohith Sharma K S 
> wrote:
>> Folks,
>> 
>> In the Hadoop world, each mailing list has its own purpose:
>> 1. The hdfs/common/yarn/mapreduce-*dev* mailing lists are meant for
>> developer discussion.
>> 2. The hdfs/common/yarn/mapreduce-*issues* mailing lists are used for
>> comments made on the issues.
>> 
>> It appears the Hadoop-Ozone repository configured the *hdfs-dev* mailing
>> list for the *hdfs-issues* list as well. As a result the hdfs-dev mailing
>> list is bombarded with every comment made in the hadoop-ozone repository.
>> 
>> 
>> Could it be fixed?
>> 
>> -Rohith Sharma K S
>> 
>> 
>> 




Re: [Discuss] Hadoop-Ozone repository mailing list configurations

2019-10-23 Thread Elek, Marton

Thanks for reporting this problem, Rohith.

Yes, it seems to be configured with the wrong mailing list.

I think the right fix is to create ozone-dev@ and ozone-issues@ and use 
them instead of hdfs-(dev/issues).


Are there any objections to creating new ozone-* mailing lists?

Thanks,
Marton


On 10/21/19 6:03 AM, Rohith Sharma K S wrote:

+ common/yarn and mapreduce/submarine

Looks like the same issue exists in the submarine repository also!


On Mon, 21 Oct 2019 at 09:30, Rohith Sharma K S 
wrote:


Folks,

In the Hadoop world, each mailing list has its own purpose:
1. The hdfs/common/yarn/mapreduce-*dev* mailing lists are meant for
developer discussion.
2. The hdfs/common/yarn/mapreduce-*issues* mailing lists are used for
comments made on the issues.

It appears the Hadoop-Ozone repository configured the *hdfs-dev* mailing
list for the *hdfs-issues* list as well. As a result the hdfs-dev mailing
list is bombarded with every comment made in the hadoop-ozone repository.


Could it be fixed?

-Rohith Sharma K S










[GitHub] [hadoop-ozone] anuengineer merged pull request #75: HDDS-2349 QueryNode does not respect null values for opState or state

2019-10-23 Thread GitBox
anuengineer merged pull request #75: HDDS-2349 QueryNode does not respect null 
values for opState or state
URL: https://github.com/apache/hadoop-ozone/pull/75
 
 
   





[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #76: HDDS-2352. Client gets internal error instead of volume not found in secure cluster

2019-10-23 Thread GitBox
adoroszlai opened a new pull request #76: HDDS-2352. Client gets internal error 
instead of volume not found in secure cluster
URL: https://github.com/apache/hadoop-ozone/pull/76
 
 
   ## What changes were proposed in this pull request?
   
   Let `checkAccess` propagate the original `OMException` with the `VOLUME_NOT_FOUND` 
result code instead of a new one with `INTERNAL_ERROR`.  This is similar to the 
[existing 
logic](https://github.com/apache/hadoop-ozone/blob/59d078605fd54b320412f4882af9f90ffaea9456/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java#L578-L585)
 for buckets.
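   
   A minimal sketch of the propagation pattern described (modeled on the 
linked bucket logic; the variable names here are illustrative):
   
   ```
   try {
     // ... perform the volume ACL check ...
   } catch (OMException ex) {
     // Re-throw as-is so the client sees the original result code,
     // e.g. VOLUME_NOT_FOUND, instead of a generic INTERNAL_ERROR.
     throw ex;
   } catch (IOException ex) {
     throw new OMException("Check access operation failed for volume: " + volume,
         ex, ResultCodes.INTERNAL_ERROR);
   }
   ```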
   
   https://issues.apache.org/jira/browse/HDDS-2352
   
   ## How was this patch tested?
   
   Tested original steps to reproduce:
   
   ```
   $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure
   $ docker-compose exec scm bash
   $ kinit -kt /etc/security/keytabs/testuser.keytab testuser/s...@example.com
   $ ozone freon ockg -n 1 -t 1
   ...
   2019-10-23 18:52:25,424 [main] INFO   - Creating Volume: vol1, with 
testuser/s...@example.com as owner.
   2019-10-23 18:52:25,542 [main] INFO   - Creating Bucket: vol1/bucket1, 
with Versioning false and Storage Type set to DISK and Encryption set to false
   ...
   Successful executions: 1
   ```
   
   Also ran `ozonesecure` acceptance test.





[jira] [Resolved] (HDDS-648) hadoop-hdds and its sub modules have undefined hadoop component

2019-10-23 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-648.

Resolution: Invalid

> hadoop-hdds and its sub modules have undefined hadoop component
> ---
>
> Key: HDDS-648
> URL: https://issues.apache.org/jira/browse/HDDS-648
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> Similar to HDDS-409, hadoop-hdds and its submodules have an undefined hadoop 
> component folder:
> When building the package, it creates an UNDEF hadoop component in the share 
> folder:
>  * 
> ./hadoop-hdds/sub-module/target/sub-module-X.Y.Z-SNAPSHOT/share/hadoop/UNDEF/lib






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338222375
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmVolumeArgs.java
 ##
 @@ -29,7 +29,7 @@
 import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 /**
- *
+ * Class used to Test OmVolumeArgs.
 
 Review comment:
   Done.





[jira] [Created] (HDDS-2352) Client gets internal error instead of volume not found in secure cluster

2019-10-23 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2352:
--

 Summary: Client gets internal error instead of volume not found in 
secure cluster
 Key: HDDS-2352
 URL: https://issues.apache.org/jira/browse/HDDS-2352
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


New Freon generators create the volume and bucket if necessary.  This does not work 
in a secure cluster for the volume, but works for the bucket:

{code}
$ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozonesecure
$ docker-compose exec scm bash
$ kinit -kt /etc/security/keytabs/testuser.keytab testuser/s...@example.com
$ ozone freon ockg -n 1
...
Check access operation failed for volume:vol1
...
Successful executions: 0
$ ozone sh volume create vol1
$ ozone freon ockg -n 1
...
2019-10-23 18:30:27,279 [main] INFO   - Creating Bucket: vol1/bucket1, with 
Versioning false and Storage Type set to DISK and Encryption set to false
...
Successful executions: 1
{code}

The problem is that the {{VOLUME_NOT_FOUND}} result is lost during the ACL check, 
and the client gets {{INTERNAL_ERROR}} instead.
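
A minimal sketch of the failure mode, assuming the ACL check wraps exceptions in a
catch-all (the class and method names here are illustrative, not the actual OM code):

{code}
// Hypothetical: a catch-all rethrow drops the original result code.
try {
  checkAcls(volume);                      // throws OMException(VOLUME_NOT_FOUND)
} catch (Exception e) {
  // The specific result is lost; the client only ever sees INTERNAL_ERROR.
  throw new OMException(e.getMessage(), ResultCodes.INTERNAL_ERROR);
}

// A fix would rethrow OMException as-is to preserve its result code:
try {
  checkAcls(volume);
} catch (OMException e) {
  throw e;                                // keeps VOLUME_NOT_FOUND
} catch (Exception e) {
  throw new OMException(e.getMessage(), ResultCodes.INTERNAL_ERROR);
}
{code}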






[jira] [Created] (HDFS-14927) RBF: Add metrics for active RPC client threads

2019-10-23 Thread Leon Gao (Jira)
Leon Gao created HDFS-14927:
---

 Summary: RBF: Add metrics for active RPC client threads
 Key: HDFS-14927
 URL: https://issues.apache.org/jira/browse/HDFS-14927
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Leon Gao
Assignee: Leon Gao


It is good to add some monitoring on the active RPC client threads, so we know 
the utilization and when to bump up `dfs.federation.router.client.thread-size`.
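
A rough sketch of what such a gauge could look like with the metrics2 annotations
(the class and wiring here are made up for illustration; the actual patch may differ):

{code}
import java.util.concurrent.ThreadPoolExecutor;

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;

@Metrics(about = "Router RPC client thread pool", context = "dfs")
public class RouterClientThreadMetrics {
  private final ThreadPoolExecutor pool;

  RouterClientThreadMetrics(ThreadPoolExecutor pool) {
    this.pool = pool;
  }

  @Metric("Number of RPC client threads actively executing tasks")
  public int getActiveThreads() {
    return pool.getActiveCount();
  }

  @Metric("Configured maximum size of the RPC client thread pool")
  public int getMaxThreads() {
    return pool.getMaximumPoolSize();
  }
}
{code}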






[jira] [Created] (HDFS-14926) RBF: Add metrics for active RPC client threads

2019-10-23 Thread Leon Gao (Jira)
Leon Gao created HDFS-14926:
---

 Summary: RBF: Add metrics for active RPC client threads
 Key: HDFS-14926
 URL: https://issues.apache.org/jira/browse/HDFS-14926
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Leon Gao
Assignee: Leon Gao


It is good to have some monitoring on the number of active client threads, so we 
know when to bump up dfs.federation.router.client.thread-size.






Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread Jonathan Hung
Hi Eric, thanks for trying it out. We talked about this in today's YARN
community sync up, summarizing here for everyone else:

I don't think it's worth delaying the 2.10.0 release further; we can
address this in a subsequent 2.10.x release. Wangda mentioned it might be
related to changes in the dominant resource calculator, but the root cause
remains to be seen.

Jonathan Hung


On Wed, Oct 23, 2019 at 9:02 AM epa...@apache.org  wrote:

> Hi Jonathan,
>
> Thanks very much for all of your work on this release.
>
> I have a concern about cross-queue (inter-queue) preemption in 2.10.
>
> In 2.8, on a 6-node pseudo-cluster, preempting from one queue to meet the
> needs of another queue seems to work as expected. However, in 2.10 on the same
> pseudo-cluster (with the same config properties), only one container was
> preempted for the AM and then nothing else.
>
> I don't know how the community feels about holding up the 2.10.0 release
> for this issue, but we need to get to the bottom of this before we can go
> to 2.10.x. I am still investigating.
>
> Thanks,
> -Eric
>
>
>
>
>  On Tuesday, October 22, 2019, 4:55:29 PM CDT, Jonathan Hung <
> jyhung2...@gmail.com> wrote:
> > Hi folks,
> >
> > This is the second release candidate for the first release of Apache
> Hadoop
> > 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> > features such as:
> >
> > - User-defined resource types
> > - Native GPU support as a schedulable resource type
> > - Consistent reads from standby node
> > - Namenode port based selective encryption
> > - Improvements related to rolling upgrade support from 2.x to 3.x
> > - Cost based fair call queue
> >
> > The RC1 artifacts are at:
> http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
> >
> > RC tag is release-2.10.0-RC1.
> >
> > The maven artifacts are hosted here:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1243/
> >
> > My public key is available here:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm
> PDT.
> >
> > Thanks,
> > Jonathan Hung
>


[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #72: HDDS-2341. Validate tar entry path during extraction

2019-10-23 Thread GitBox
adoroszlai commented on a change in pull request #72: HDDS-2341. Validate tar 
entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#discussion_r338166152
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
 ##
 @@ -188,13 +184,139 @@ public void pack() throws IOException, 
CompressorException {
 assertExampleChunkFileIsGood(
 Paths.get(destinationContainerData.getChunksPath()));
 Assert.assertFalse(
-"Descriptor file should not been exctarcted by the "
+"Descriptor file should not have been extracted by the "
 + "unpackContainerData Call",
 destinationContainer.getContainerFile().exists());
 Assert.assertEquals(TEST_DESCRIPTOR_FILE_CONTENT, descriptor);
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeDbFilePath()
+  throws Exception {
+//GIVEN
+KeyValueContainerData sourceContainerData =
+createContainer(SOURCE_CONTAINER_ROOT);
+
+String fileName = "sub/dir/" + TEST_DB_FILE_NAME;
+File file = writeDbFile(sourceContainerData, fileName);
+String entryName = TarContainerPacker.DB_DIR_NAME + "/" + fileName;
+
+File containerFile = packContainerWithSingleFile(file, entryName);
+
+// WHEN
+unpackContainerData(containerFile);
+
+// THEN
+assertExampleMetadataDbIsGood(file.toPath().getParent());
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeChunkFilePath()
+  throws Exception {
+//GIVEN
+KeyValueContainerData sourceContainerData =
+createContainer(SOURCE_CONTAINER_ROOT);
+
+String fileName = "sub/dir/" + TEST_CHUNK_FILE_NAME;
+File file = writeChunkFile(sourceContainerData, fileName);
+String entryName = TarContainerPacker.CHUNKS_DIR_NAME + "/" + fileName;
+
+File containerFile = packContainerWithSingleFile(file, entryName);
+
+// WHEN
+unpackContainerData(containerFile);
+
+// THEN
+assertExampleChunkFileIsGood(file.toPath().getParent());
 
 Review comment:
   Good point.
   
   I don't recall exactly why I added this assertion.  I can say that the main 
goal of the valid/invalid test cases is to verify that they are 
accepted/rejected by `unpackContainerData` (by either not throwing or throwing 
an exception).  The subsequent assertion is not really necessary, as the 
pack/unpack process is validated in `@Test pack()`.
   
   Now that you pointed this out, I tweaked the test a bit to be able to verify 
the chunk/db file for these cases, too.  Also added post-test cleanup.
   
   Thanks for the review.





Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-10-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1298/

[Oct 22, 2019 1:04:02 PM] (ayushsaxena) HDFS-14918. Remove useless 
getRedundancyThread from
[Oct 22, 2019 1:14:22 PM] (ayushsaxena) HDFS-14915. Move Superuser Check Before 
Taking Lock For Encryption API.
[Oct 22, 2019 8:31:15 PM] (weichiu) HDFS-14884. Add sanity check that zone key 
equals feinfo key while




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

FindBugs :

   module:hadoop-ozone/csi 
   Useless control flow in 
csi.v1.Csi$CapacityRange$Builder.maybeForceBuilderInitialization() At Csi.java: 
At Csi.java:[line 15977] 
   Class csi.v1.Csi$ControllerExpandVolumeRequest defines non-transient 
non-serializable instance field secrets_ In Csi.java:instance field secrets_ In 
Csi.java 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 50408] 
   Useless control flow in 
csi.v1.Csi$ControllerExpandVolumeResponse$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 51319] 
   Useless control flow in 
csi.v1.Csi$ControllerGetCapabilitiesRequest$Builder.maybeForceBuilderInitialization()
 At Csi.java: At Csi.java:[line 39596] 
   Class csi.v1.Csi$ControllerPublishVolumeRequest defines non-transient 
non-serializable instance field 

[GitHub] [hadoop-ozone] dineshchitlangia commented on issue #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
dineshchitlangia commented on issue #71: HDDS-2344. Add immutable entries in to 
the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#issuecomment-545522617
 
 
   Other test failures seem unrelated.





Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-23 Thread epa...@apache.org
Hi Jonathan,

Thanks very much for all of your work on this release.

I have a concern about cross-queue (inter-queue) preemption in 2.10.

In 2.8, on a 6-node pseudo-cluster, preempting from one queue to meet the needs 
of another queue seems to work as expected. However, in 2.10 on the same 
pseudo-cluster (with the same config properties), only one container was 
preempted for the AM and then nothing else.

I don't know how the community feels about holding up the 2.10.0 release for 
this issue, but we need to get to the bottom of this before we can go to 
2.10.x. I am still investigating.

Thanks,
-Eric




 On Tuesday, October 22, 2019, 4:55:29 PM CDT, Jonathan Hung 
 wrote: 
> Hi folks,
> 
> This is the second release candidate for the first release of Apache Hadoop
> 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> features such as:
> 
> - User-defined resource types
> - Native GPU support as a schedulable resource type
> - Consistent reads from standby node
> - Namenode port based selective encryption
> - Improvements related to rolling upgrade support from 2.x to 3.x
> - Cost based fair call queue
> 
> The RC1 artifacts are at: http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
> 
> RC tag is release-2.10.0-RC1.
> 
> The maven artifacts are hosted here:
> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
> 
> My public key is available here:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm PDT.
> 
> Thanks,
> Jonathan Hung




[jira] [Created] (HDFS-14925) rename operation should check nest snapshot

2019-10-23 Thread Junwang Zhao (Jira)
Junwang Zhao created HDFS-14925:
---

 Summary: rename operation should check nest snapshot
 Key: HDFS-14925
 URL: https://issues.apache.org/jira/browse/HDFS-14925
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Junwang Zhao


When we do a rename operation, if the src directory or any of its descendants
is snapshottable, and the dst directory or any of its ancestors is snapshottable,
we consider this a nested snapshot, which should be denied.
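
A sketch of the intended check (the helper methods below are hypothetical, shown
only to illustrate the rule):

{code}
// Hypothetical helpers: deny a rename that would nest one
// snapshottable directory under another.
boolean srcSideSnapshottable = hasSnapshottableDescendant(srcDir); // src or any descendant
boolean dstSideSnapshottable = hasSnapshottableAncestor(dstDir);   // dst or any ancestor
if (srcSideSnapshottable && dstSideSnapshottable) {
  throw new SnapshotException("Rename " + src + " to " + dst
      + " would create a nested snapshottable directory");
}
{code}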






[GitHub] [hadoop-ozone] elek commented on issue #74: HDDS-2348.Remove log4j properties for package org.apache.hadoop.ozone

2019-10-23 Thread GitBox
elek commented on issue #74: HDDS-2348.Remove log4j properties for package 
org.apache.hadoop.ozone
URL: https://github.com/apache/hadoop-ozone/pull/74#issuecomment-545472098
 
 
   Thank you very much @chimney-lee for filing this jira. The log4j config is 
inherited from the original hadoop log4j (where we created a separate logger for 
ozone). I agree that it could be simplified (for example the `OZONE` and 
`console` appenders are very similar).
   
   But it's not clear what the problem is with logging everything to the file.
   
   > Remove log4j config for package org.apache.hadoop.ozone, as it cause the 
log in this package cannot be written to .log file
   
   Can you please explain why the log can't be written to the log file? (Which 
log file?)
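
   For reference, a rough programmatic equivalent of the inherited setup (log4j
1.x API; a sketch of the mechanism, not the actual properties file):

   ```java
   import org.apache.log4j.FileAppender;
   import org.apache.log4j.Logger;
   import org.apache.log4j.PatternLayout;

   public class OzoneLoggerSketch {
     public static void main(String[] args) throws Exception {
       // A dedicated, non-additive "ozone" logger with its own appender.
       // With additivity off, events for org.apache.hadoop.ozone never reach
       // the root logger's file appender -- the kind of setup that would keep
       // these log lines out of the main .log file.
       Logger ozone = Logger.getLogger("org.apache.hadoop.ozone");
       ozone.setAdditivity(false);
       ozone.addAppender(new FileAppender(
           new PatternLayout("%d{ISO8601} %p %c: %m%n"), "ozone.log"));
       ozone.info("goes only to ozone.log, not to the root appender");
     }
   }
   ```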





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #72: HDDS-2341. Validate tar entry path during extraction

2019-10-23 Thread GitBox
nandakumar131 commented on a change in pull request #72: HDDS-2341. Validate 
tar entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#discussion_r338072207
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
 ##
 @@ -188,13 +184,139 @@ public void pack() throws IOException, 
CompressorException {
 assertExampleChunkFileIsGood(
 Paths.get(destinationContainerData.getChunksPath()));
 Assert.assertFalse(
-"Descriptor file should not been exctarcted by the "
+"Descriptor file should not have been extracted by the "
 + "unpackContainerData Call",
 destinationContainer.getContainerFile().exists());
 Assert.assertEquals(TEST_DESCRIPTOR_FILE_CONTENT, descriptor);
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeDbFilePath()
+  throws Exception {
+//GIVEN
+KeyValueContainerData sourceContainerData =
+createContainer(SOURCE_CONTAINER_ROOT);
+
+String fileName = "sub/dir/" + TEST_DB_FILE_NAME;
+File file = writeDbFile(sourceContainerData, fileName);
+String entryName = TarContainerPacker.DB_DIR_NAME + "/" + fileName;
+
+File containerFile = packContainerWithSingleFile(file, entryName);
+
+// WHEN
+unpackContainerData(containerFile);
+
+// THEN
+assertExampleMetadataDbIsGood(file.toPath().getParent());
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeChunkFilePath()
+  throws Exception {
+//GIVEN
+KeyValueContainerData sourceContainerData =
+createContainer(SOURCE_CONTAINER_ROOT);
+
+String fileName = "sub/dir/" + TEST_CHUNK_FILE_NAME;
+File file = writeChunkFile(sourceContainerData, fileName);
+String entryName = TarContainerPacker.CHUNKS_DIR_NAME + "/" + fileName;
+
+File containerFile = packContainerWithSingleFile(file, entryName);
+
+// WHEN
+unpackContainerData(containerFile);
+
+// THEN
+assertExampleChunkFileIsGood(file.toPath().getParent());
 
 Review comment:
   The assertion here is done on source container itself, is this intentional?





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #72: HDDS-2341. Validate tar entry path during extraction

2019-10-23 Thread GitBox
nandakumar131 commented on a change in pull request #72: HDDS-2341. Validate 
tar entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#discussion_r338072825
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
 ##
 @@ -188,13 +184,139 @@ public void pack() throws IOException, 
CompressorException {
 assertExampleChunkFileIsGood(
 Paths.get(destinationContainerData.getChunksPath()));
 Assert.assertFalse(
-"Descriptor file should not been exctarcted by the "
+"Descriptor file should not have been extracted by the "
 + "unpackContainerData Call",
 destinationContainer.getContainerFile().exists());
 Assert.assertEquals(TEST_DESCRIPTOR_FILE_CONTENT, descriptor);
+  }
+
+  @Test
+  public void unpackContainerDataWithValidRelativeDbFilePath()
+  throws Exception {
+//GIVEN
+KeyValueContainerData sourceContainerData =
+createContainer(SOURCE_CONTAINER_ROOT);
+
+String fileName = "sub/dir/" + TEST_DB_FILE_NAME;
+File file = writeDbFile(sourceContainerData, fileName);
+String entryName = TarContainerPacker.DB_DIR_NAME + "/" + fileName;
+
+File containerFile = packContainerWithSingleFile(file, entryName);
+
+// WHEN
+unpackContainerData(containerFile);
+
+// THEN
+assertExampleMetadataDbIsGood(file.toPath().getParent());
 
 Review comment:
   The assertion here is done on source container itself, is this intentional?





[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
dineshchitlangia commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338056880
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmVolumeArgs.java
 ##
 @@ -29,7 +29,7 @@
 import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 /**
- *
+ * Class used to Test OmVolumeArgs.
 
 Review comment:
   NIT: `Test` -> `test`





[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
dineshchitlangia commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r338056264
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -298,4 +303,27 @@ public static OmOzoneAclMap ozoneAclGetFromProtobuf(
   public List getDefaultAclList() {
 return defaultAclList;
   }
+
+  @Override
+  public Object clone() {
+ArrayList> accessMap = new ArrayList<>();
+
+// Initialize.
+for (OzoneAclType aclType : OzoneAclType.values()) {
+  accessMap.add(aclType.ordinal(), new HashMap<>());
+}
+
+// Add from original accessAclMap to accessMap.
+for (OzoneAclType aclType : OzoneAclType.values()) {
+  final int ordinal = aclType.ordinal();
+  accessAclMap.get(ordinal).forEach((k, v) ->
+  accessMap.get(ordinal).put(k, (BitSet) v.clone()));
+}
+
+// We can do shallow copy here, as OzoneAclInfo is immutable structure.
+ArrayList defaultList = new ArrayList<>();
+defaultList.addAll(defaultAclList);
+
+return new OmOzoneAclMap(defaultList, accessMap);
+  }
 
 Review comment:
   Given the use case, and to avoid a shallow copy, it is better to make it final.
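
   For illustration, a minimal demo of why the per-value clone matters (plain
JDK types here, not the OM classes):

   ```java
   import java.util.BitSet;
   import java.util.HashMap;
   import java.util.Map;

   public class DeepCopyDemo {
     public static void main(String[] args) {
       Map<String, BitSet> original = new HashMap<>();
       original.put("user:foo", BitSet.valueOf(new long[]{0b101}));

       // Shallow copy: both maps share the same BitSet instance,
       // so a mutation through one map is visible through the other.
       Map<String, BitSet> shallow = new HashMap<>(original);
       shallow.get("user:foo").set(7);
       System.out.println(original.get("user:foo").get(7)); // true

       // Deep copy: clone each BitSet so mutations stay isolated.
       Map<String, BitSet> deep = new HashMap<>();
       original.forEach((k, v) -> deep.put(k, (BitSet) v.clone()));
       deep.get("user:foo").clear(7);
       System.out.println(original.get("user:foo").get(7)); // still true
     }
   }
   ```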





[jira] [Created] (HDFS-14924) RenameSnapshot not updating new modification time

2019-10-23 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-14924:


 Summary: RenameSnapshot not updating new modification time
 Key: HDFS-14924
 URL: https://issues.apache.org/jira/browse/HDFS-14924
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina
Assignee: hemanthboyina


RenameSnapshot doesn't update the modification time.






[GitHub] [hadoop-ozone] adoroszlai commented on issue #65: HDDS-2334. Dummy chunk manager fails with length mismatch error

2019-10-23 Thread GitBox
adoroszlai commented on issue #65: HDDS-2334. Dummy chunk manager fails with 
length mismatch error
URL: https://github.com/apache/hadoop-ozone/pull/65#issuecomment-545414887
 
 
   Thanks all for reviews and @mukul1987 for merging it.





[jira] [Created] (HDFS-14923) Remove dead code from HealthMonitor

2019-10-23 Thread Fei Hui (Jira)
Fei Hui created HDFS-14923:
--

 Summary: Remove dead code from HealthMonitor
 Key: HDFS-14923
 URL: https://issues.apache.org/jira/browse/HDFS-14923
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.3, 3.2.1, 3.3.0
Reporter: Fei Hui


Digging into the ZKFC source code, I found the following dead code:
{code}
public void removeCallback(Callback cb) {
   callbacks.remove(cb);
}

public synchronized void removeServiceStateCallback(ServiceStateCallback cb) {
   serviceStateCallbacks.remove(cb);
}

synchronized HAServiceStatus getLastServiceStatus() {
   return lastServiceState;
}
{code}
It's useless, and should be deleted.






[jira] [Resolved] (HDDS-2287) Move ozone source code to apache/hadoop-ozone from apache/hadoop

2019-10-23 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-2287.
---
Resolution: Fixed

> Move ozone source code to apache/hadoop-ozone from apache/hadoop
> 
>
> Key: HDDS-2287
> URL: https://issues.apache.org/jira/browse/HDDS-2287
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> *This issue is created so that the assigned number can be used for any technical 
> commits, making it easy to trace the root reason for each commit...*
>  
> As discussed and voted on the mailing lists, Apache Hadoop Ozone source code 
> will be removed from the hadoop trunk and stored in a separated repository.
>  
> Original discussion is here:
> [https://lists.apache.org/thread.html/ef01b7def94ba58f746875999e419e10645437423ab9af19b32821e7@%3Chdfs-dev.hadoop.apache.org%3E]
> (It started as a discussion, but as everybody began to vote, it finished 
> with a call for a lazy consensus vote)
>  
> Technical proposal is shared on the wiki: 
> [https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Ozone+source+tree+split]
>  
> Discussed on the community meeting: 
> [https://cwiki.apache.org/confluence/display/HADOOP/2019-09-30+Meeting+notes]
>  
> Which is shared on the mailing list to get more feedback: 
> [https://lists.apache.org/thread.html/ed608c708ea302675ae5e39636ed73613f47a93c2ddfbd3c9e24dbae@%3Chdfs-dev.hadoop.apache.org%3E]
>  






[jira] [Created] (HDDS-2350) NullPointerException seen in datanode log while writing data

2019-10-23 Thread Nilotpal Nandi (Jira)
Nilotpal Nandi created HDDS-2350:


 Summary: NullPointerException seen in datanode log while writing 
data
 Key: HDDS-2350
 URL: https://issues.apache.org/jira/browse/HDDS-2350
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi


A NullPointerException is seen in the datanode log while writing 10GB of data. 
There is one pipeline with factor 3 while writing the data.
{noformat}
2019-10-23 11:25:45,674 ERROR 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics 
from source 
ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2
java.lang.NullPointerException
 at 
org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219)
 at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
 at java.util.TimerThread.mainLoop(Timer.java:555)
 at java.util.TimerThread.run(Timer.java:505)
2019-10-23 11:25:55,673 ERROR 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics 
from source 
ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2
java.lang.NullPointerException
 at 
org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219)
 at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
 at java.util.TimerThread.mainLoop(Timer.java:555)
 at java.util.TimerThread.run(Timer.java:505)
2019-10-23 11:26:05,674 ERROR 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Error getting metrics 
from source 
ratis_core.ratis_leader.a23fb300-4c1e-420f-a21e-7e73d0c22cbe@group-4CA404C938C2
java.lang.NullPointerException
 at 
org.apache.ratis.server.impl.RaftLeaderMetrics.lambda$null$2(RaftLeaderMetrics.java:86)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.snapshotAllMetrics(HadoopMetrics2Reporter.java:239)
 at 
com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.getMetrics(HadoopMetrics2Reporter.java:219)
 at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
 at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
 at java.util.TimerThread.mainLoop(Timer.java:555)
 at java.util.TimerThread.run(Timer.java:505){noformat}






[GitHub] [hadoop-ozone] mukul1987 commented on issue #65: HDDS-2334. Dummy chunk manager fails with length mismatch error

2019-10-23 Thread GitBox
mukul1987 commented on issue #65: HDDS-2334. Dummy chunk manager fails with 
length mismatch error
URL: https://github.com/apache/hadoop-ozone/pull/65#issuecomment-545395598
 
 
   Thanks for the contribution @adoroszlai and @bshashikant and @supratimdeka 
for reviews. I am +1 on the patch as well.





[GitHub] [hadoop-ozone] mukul1987 merged pull request #65: HDDS-2334. Dummy chunk manager fails with length mismatch error

2019-10-23 Thread GitBox
mukul1987 merged pull request #65: HDDS-2334. Dummy chunk manager fails with 
length mismatch error
URL: https://github.com/apache/hadoop-ozone/pull/65
 
 
   





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337974659
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerReplicationEndToEnd.java
 ##
 @@ -54,8 +54,7 @@
 import java.util.function.Predicate;
 
 import static 
org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
-import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_PIPELINE_DESTROY_TIMEOUT;
-import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.*;
 
 Review comment:
   IDE did this. Fixing





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337974011
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
 ##
 @@ -352,6 +353,16 @@ public Builder setNumDatanodes(int val) {
   return this;
 }
 
+/**
+ * Sets the total number of pipelines to create.
+ * @param val number of pipelines
+ * @return MiniOzoneCluster.Builder
+ */
+public Builder setPipelineNumLimit(int val) {
 
 Review comment:
   Sure





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337973757
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
 ##
 @@ -73,8 +75,11 @@
   public static void init() throws Exception {
 conf = new OzoneConfiguration();
 conf.set(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT, "1");
+conf.setInt(OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT, 2);
 
 Review comment:
   The conf setting is the universal way of setting the limit. The current way in 
MiniOzoneCluster to set the global pipeline limit is rather a convenience method 
for internal use.
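
   For illustration, roughly how the two approaches look in a test (a sketch;
this assumes the key lives in `ScmConfigKeys`, matching the static imports in
these tests):

   ```java
   // Universal way: cap per-datanode pipeline engagement via configuration.
   OzoneConfiguration conf = new OzoneConfiguration();
   conf.setInt(ScmConfigKeys.OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT, 2);

   // Test-only convenience: set the global pipeline limit on the builder.
   MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
       .setNumDatanodes(3)
       .setPipelineNumLimit(6)
       .build();
   ```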





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337973180
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -53,8 +53,7 @@
 import java.util.concurrent.TimeoutException;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
-import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_SCM_WATCHER_TIMEOUT;
-import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.*;
 
 Review comment:
   IDE did this. Fixing





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337972432
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
 ##
 @@ -494,6 +494,9 @@ void initializeConfiguration() throws IOException {
   streamBufferMaxSize.get(), streamBufferSizeUnit.get());
   conf.setStorageSize(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE, 
blockSize.get(),
   streamBufferSizeUnit.get());
+  // MiniOzoneCluster should have global pipeline upper limit.
+  conf.setInt(ScmConfigKeys.OZONE_SCM_PIPELINE_NUMBER_LIMIT,
+  pipelineNumLimit == 3 ? 2 * numOfDatanodes : pipelineNumLimit);
 
 Review comment:
   3 is the default value of pipelineNumLimit, meaning the user set no explicit 
limit. I should've made this a named constant.
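
   Something like this sketch (the constant name is just a suggestion):

   ```java
   // Default of pipelineNumLimit; signals that no user-set limit exists.
   private static final int DEFAULT_PIPELINE_NUM_LIMIT = 3;

   conf.setInt(ScmConfigKeys.OZONE_SCM_PIPELINE_NUMBER_LIMIT,
       pipelineNumLimit == DEFAULT_PIPELINE_NUM_LIMIT
           ? 2 * numOfDatanodes : pipelineNumLimit);
   ```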





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337971308
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/safemode/TestSCMSafeModeWithPipelineRules.java
 ##
 @@ -62,8 +63,11 @@ public void setup(int numDatanodes) throws Exception {
 true);
 conf.set(HddsConfigKeys.HDDS_SCM_WAIT_TIME_AFTER_SAFE_MODE_EXIT, "10s");
 conf.set(ScmConfigKeys.OZONE_SCM_PIPELINE_CREATION_INTERVAL, "10s");
+conf.setInt(OZONE_DATANODE_MAX_PIPELINE_ENGAGEMENT, 1000);
 
 Review comment:
   I can make it smaller.





[GitHub] [hadoop-ozone] elek commented on issue #23: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-23 Thread GitBox
elek commented on issue #23: HDDS-1868. Ozone pipelines should be marked as 
ready only after the leader election is complete.
URL: https://github.com/apache/hadoop-ozone/pull/23#issuecomment-545351439
 
 
   /retest





[jira] [Created] (HDFS-14922) On StartUp , Snapshot modification time got changed

2019-10-23 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-14922:


 Summary: On StartUp , Snapshot modification time got changed
 Key: HDFS-14922
 URL: https://issues.apache.org/jira/browse/HDFS-14922
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina
Assignee: hemanthboyina


Snapshot modification time got changed on namenode restart






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #72: HDDS-2341. Validate tar entry path during extraction

2019-10-23 Thread GitBox
adoroszlai commented on a change in pull request #72: HDDS-2341. Validate tar 
entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#discussion_r337924202
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/TarContainerPacker.java
 ##
 @@ -114,12 +116,11 @@
 }
   }
 
+  @SuppressFBWarnings("NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE")
 
 Review comment:
   Thanks @dineshchitlangia for the review.  I agree about the need for an 
explanation; I was just kind of lazy adding it.  In fact, I think it's simpler 
to replace the suppression with a null check. :)
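
   For example, something along these lines (a sketch; the exact extraction code
may differ):

   ```java
   // Explicit null handling instead of suppressing the findbugs warning.
   Path parent = path.getParent();
   if (parent == null) {
     throw new IOException("Tar entry has no parent directory: " + path);
   }
   Files.createDirectories(parent);
   ```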





[jira] [Created] (HDFS-14921) Remove SuperUser Check in Setting Storage Policy in FileStatus During Listing

2019-10-23 Thread Ayush Saxena (Jira)
Ayush Saxena created HDFS-14921:
---

 Summary: Remove SuperUser Check in Setting Storage Policy in 
FileStatus During Listing
 Key: HDFS-14921
 URL: https://issues.apache.org/jira/browse/HDFS-14921
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Earlier, StoragePolicy operations were part of DFSAdmin and required a superuser 
check. That check was removed long back, but the check in getListing was left 
behind.






[GitHub] [hadoop-ozone] sodonnel opened a new pull request #75: HDDS-2349 QueryNode does not respect null values for opState or state

2019-10-23 Thread GitBox
sodonnel opened a new pull request #75: HDDS-2349 QueryNode does not respect 
null values for opState or state
URL: https://github.com/apache/hadoop-ozone/pull/75
 
 
   In HDDS-2197, the queryNode API call was changed to allow an operational state 
(in_service, decommissioning, etc.) to be passed along with the node health 
state. This change allowed a null state to indicate a wildcard, so passing:
   
   opState = null
   healthState = HEALTHY
   
   Allows one to find all the healthy nodes, irrespective of their opState.
   
   However, for an enum protobuf field, if no value is specified, the first 
enum in the set is returned as the default. This means that when a null is 
passed for opState, only the IN_SERVICE nodes are returned. Similarly for the 
health state: passing a null will return only HEALTHY nodes.
   
   This PR will fix this issue so the null value acts as a wildcard as intended.
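
   A minimal sketch of the intended handling (the message and field names are
assumed here, not the exact protocol):

   ```java
   // An unset optional enum field still returns the first enum value from
   // getState(); check hasState() to distinguish "unset", i.e. the wildcard.
   HddsProtos.NodeState healthState = null;
   if (request.hasState()) {
     healthState = request.getState();
   }
   // A null healthState can then match nodes in any health state.
   ```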





[jira] [Created] (HDDS-2349) QueryNode does not respect null values for opState or state

2019-10-23 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2349:
---

 Summary: QueryNode does not respect null values for opState or 
state
 Key: HDDS-2349
 URL: https://issues.apache.org/jira/browse/HDDS-2349
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


In HDDS-2197, the queryNode API call was changed to allow an operational state 
(in_service, decommissioning, etc.) to be passed along with the node health 
state. This change allowed a null state to indicate a wildcard, so passing:

opState = null
healthState = HEALTHY

Allows one to find all the healthy nodes, irrespective of their opState.

However, for an enum protobuf field, if no value is specified, the first enum 
in the set is returned as the default. This means that when a null is passed 
for opState, only the IN_SERVICE nodes are returned. Similarly for the health 
state: passing a null will return only HEALTHY nodes.

This PR will fix this issue so the null value acts as a wildcard as intended.






[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337899055
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
 ##
 @@ -253,10 +256,8 @@ public void testPipelineCreationFailedMetric() throws 
Exception {
   pipelineManager.createPipeline(HddsProtos.ReplicationType.RATIS,
   HddsProtos.ReplicationFactor.THREE);
   Assert.fail();
-} catch (InsufficientDatanodesException idEx) {
-  Assert.assertEquals(
-  "Cannot create pipeline of factor 3 using 1 nodes.",
-  idEx.getMessage());
+} catch (SCMException idEx) {
+  // pipeline creation failed this time.
 
 Review comment:
   It didn't have one before, but I can add some checks here.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337898025
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
 ##
 @@ -103,7 +108,9 @@ public void testPipelineCreationOnNodeRestart() throws 
Exception {
 } catch (IOException ioe) {
   // As now all datanodes are shutdown, they move to stale state, there
   // will be no sufficient datanodes to create the pipeline.
-  Assert.assertTrue(ioe instanceof InsufficientDatanodesException);
+  Assert.assertTrue(ioe instanceof SCMException);
+  Assert.assertTrue(((SCMException) ioe).getResult()
 
 Review comment:
   Sure





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337896461
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,45 @@ private void initializePipelineState() throws 
IOException {
 }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+if (factor != ReplicationFactor.THREE) {
+  // Only put limits for Factor THREE pipelines.
+  return false;
+}
+// Per datanode limit
+if (heavyNodeCriteria > 0) {
+  return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+  stateManager.getPipelines(ReplicationType.RATIS, factor,
+  Pipeline.PipelineState.CLOSED).size()) > heavyNodeCriteria *
+  nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY) /
+  factor.getNumber();
+}
+
+// Global limit
+if (pipelineNumberLimit > 0) {
+  return (stateManager.getPipelines(ReplicationType.RATIS,
+  ReplicationFactor.THREE).size() - stateManager.getPipelines(
+  ReplicationType.RATIS, ReplicationFactor.THREE,
+  Pipeline.PipelineState.CLOSED).size()) >
+  (pipelineNumberLimit - stateManager.getPipelines(
+  ReplicationType.RATIS, ReplicationFactor.ONE).size());
+}
+
+return false;
+  }
+
   @Override
   public synchronized Pipeline createPipeline(
   ReplicationType type, ReplicationFactor factor) throws IOException {
 lock.writeLock().lock();
+if (type == ReplicationType.RATIS && exceedPipelineNumberLimit(factor)) {
 
 Review comment:
   Good point. Updating.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337868451
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -147,10 +152,45 @@ private void initializePipelineState() throws 
IOException {
 }
   }
 
+  private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
+if (factor != ReplicationFactor.THREE) {
+  // Only put limits for Factor THREE pipelines.
+  return false;
+}
+// Per datanode limit
+if (heavyNodeCriteria > 0) {
+  return (stateManager.getPipelines(ReplicationType.RATIS, factor).size() -
+  stateManager.getPipelines(ReplicationType.RATIS, factor,
+  Pipeline.PipelineState.CLOSED).size()) > heavyNodeCriteria *
+  nodeManager.getNodeCount(HddsProtos.NodeState.HEALTHY) /
+  factor.getNumber();
+}
+
+// Global limit
+if (pipelineNumberLimit > 0) {
+  return (stateManager.getPipelines(ReplicationType.RATIS,
+  ReplicationFactor.THREE).size() - stateManager.getPipelines(
+  ReplicationType.RATIS, ReplicationFactor.THREE,
+  Pipeline.PipelineState.CLOSED).size()) >
+  (pipelineNumberLimit - stateManager.getPipelines(
 
 Review comment:
   Yes. RATIS ONE pipelines still use the old way of picking nodes. Only RATIS 
THREE pipelines use the new, more complex placement logic in 
PipelinePlacementPolicy.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337867801
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -115,6 +114,12 @@ public SCMPipelineManager(Configuration conf, NodeManager 
nodeManager,
 "SCMPipelineManagerInfo", this);
 initializePipelineState();
 this.grpcTlsConfig = grpcTlsConfig;
+this.pipelineNumberLimit = conf.getInt(
+ScmConfigKeys.OZONE_SCM_PIPELINE_NUMBER_LIMIT,
+ScmConfigKeys.OZONE_SCM_PIPELINE_NUMBER_LIMIT_DEFAULT);
+this.heavyNodeCriteria = conf.getInt(
 
 Review comment:
   Sure





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337867554
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -846,10 +846,17 @@
 
   </property>
   <property>
-    <name>ozone.scm.datanode.max.pipeline.engagement</name>
-    <value>5</value>
+    <name>ozone.scm.datanode.max.pipeline.engagement</name>
+    <value>0</value>
+    <tag>OZONE, SCM, PIPELINE</tag>
+    <description>Max number of pipelines per datanode can be engaged in.
 
 Review comment:
   Sure.





[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
timmylicheng commented on a change in pull request #28: HDDS-1569 Support 
creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337866903
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2PipelineMap.java
 ##
 @@ -71,6 +71,10 @@ public synchronized void addPipeline(Pipeline pipeline) {
       UUID dnId = details.getUuid();
       dn2ObjectMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
           .add(pipeline.getId());
+      dn2ObjectMap.computeIfPresent(dnId, (k, v) -> {
+        v.add(pipeline.getId());
 
 Review comment:
   Line 73 adds the pipeline id to a newly created key set when the key is 
absent, and line 75 adds it to the existing key set. We don't need an extra if 
here because computeIfAbsent and computeIfPresent both check whether the key 
is present, so there is no need to test whether Map.get(k) returns null.
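
   For reference, a tiny stand-alone demo of those Map semantics (a generic 
example, not the Ozone code): computeIfAbsent runs its mapping function only 
when the key is missing and returns the existing value otherwise, while 
computeIfPresent runs its function only when the key already exists, so 
neither call needs a preceding null check.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Generic demo of computeIfAbsent / computeIfPresent semantics.
public final class ComputeDemo {
  public static void main(String[] args) {
    Map<String, Set<String>> dn2ObjectMap = new ConcurrentHashMap<>();

    // Key absent: a fresh key set is created and "p1" is added to it.
    dn2ObjectMap.computeIfAbsent("dn1", k -> ConcurrentHashMap.newKeySet())
        .add("p1");

    // Key present: the remapping function receives the existing key set.
    dn2ObjectMap.computeIfPresent("dn1", (k, v) -> {
      v.add("p2");
      return v;
    });

    System.out.println(dn2ObjectMap.get("dn1")); // [p1, p2] (in some order)
  }
}
```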





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-23 Thread GitBox
xiaoyuyao commented on a change in pull request #28: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop-ozone/pull/28#discussion_r337865795
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java
 ##
 @@ -352,6 +353,16 @@ public Builder setNumDatanodes(int val) {
       return this;
     }
 
+    /**
+     * Sets the total number of pipelines to create.
+     * @param val number of pipelines
+     * @return MiniOzoneCluster.Builder
+     */
+    public Builder setPipelineNumLimit(int val) {
 
 Review comment:
   NIT: rename to setTotalPipelineNumLimit
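
   For illustration, a hypothetical test snippet using the suggested name 
(assuming the usual MiniOzoneCluster builder entry points; 
setTotalPipelineNumLimit is the proposed rename, not an existing method):

```java
// Hypothetical usage sketch; setTotalPipelineNumLimit is the rename
// suggested above, the other builder calls are assumed from context.
OzoneConfiguration conf = new OzoneConfiguration();
MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
    .setNumDatanodes(3)
    .setTotalPipelineNumLimit(5) // cap the whole cluster at 5 pipelines
    .build();
cluster.waitForClusterToBeReady();
```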





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r337861523
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -298,4 +303,27 @@ public static OmOzoneAclMap ozoneAclGetFromProtobuf(
   public List<OzoneAclInfo> getDefaultAclList() {
     return defaultAclList;
   }
+
+  @Override
+  public Object clone() {
+    ArrayList<Map<String, BitSet>> accessMap = new ArrayList<>();
+
+    // Initialize.
+    for (OzoneAclType aclType : OzoneAclType.values()) {
+      accessMap.add(aclType.ordinal(), new HashMap<>());
+    }
+
+    // Add from original accessAclMap to accessMap.
+    for (OzoneAclType aclType : OzoneAclType.values()) {
+      final int ordinal = aclType.ordinal();
+      accessAclMap.get(ordinal).forEach((k, v) ->
+          accessMap.get(ordinal).put(k, (BitSet) v.clone()));
+    }
+
+    // We can do shallow copy here, as OzoneAclInfo is immutable structure.
+    ArrayList<OzoneAclInfo> defaultList = new ArrayList<>();
+    defaultList.addAll(defaultAclList);
+
+    return new OmOzoneAclMap(defaultList, accessMap);
+  }
 
 Review comment:
   I thought it was a final class and wrote it that way. I think this class 
should be marked final like other classes such as OMVolumeArgs, since these 
classes represent proto structure data. Also, a few fields need a deep copy, 
which is why I implemented it this way (if I used super.clone it would make a 
shallow copy and I would then have to replace a few fields, so the cloning 
work would be done twice). Let me know what you think.
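
   To illustrate the trade-off with a generic sketch (not the OmOzoneAclMap 
code): super.clone() copies field references only, so every mutable field 
would still have to be replaced afterwards, whereas a hand-written clone 
deep-copies them in a single pass.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Generic sketch: hand-written deep clone vs. super.clone().
final class AclBits implements Cloneable {
  private final List<BitSet> bits;

  AclBits(List<BitSet> bits) {
    this.bits = bits;
  }

  // Hand-written deep clone: each mutable BitSet is copied, so mutating
  // the clone never leaks back into the original object.
  @Override
  public AclBits clone() {
    List<BitSet> copy = new ArrayList<>(bits.size());
    for (BitSet b : bits) {
      copy.add((BitSet) b.clone());
    }
    return new AclBits(copy);
  }
  // super.clone() would instead return an AclBits sharing the same List
  // instance; the mutable fields would then have to be re-assigned anyway
  // (the "double work" above), and a final field could not be re-assigned
  // at all.
}
```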





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-23 Thread GitBox
bharatviswa504 commented on a change in pull request #71: HDDS-2344. Add 
immutable entries in to the DoubleBuffer for Volume requests.
URL: https://github.com/apache/hadoop-ozone/pull/71#discussion_r337861558
 
 

 ##
 File path: 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmVolumeArgs.java
 ##
 @@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.helpers;
+
+import java.util.Collections;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.Time;
+import org.junit.Assert;
+import org.junit.Test;
+
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+
+/**
+ *
+ */
+public class TestOmVolumeArgs {
 
 Review comment:
   Updated it.




