[jira] [Updated] (HADOOP-16390) Build fails due to bad use of '>' in javadoc

2019-06-24 Thread Kei Kori (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kei Kori updated HADOOP-16390:
--
Attachment: HADOOP-16390.001.patch
Status: Patch Available  (was: Open)

> Build fails due to bad use of '>' in javadoc
> 
>
> Key: HADOOP-16390
> URL: https://issues.apache.org/jira/browse/HADOOP-16390
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kei Kori
>Assignee: Kei Kori
>Priority: Critical
> Attachments: HADOOP-16390.001.patch
>
>
> Building trunk with JDK8 fails with the following errors:
> {code:java}
> [ERROR] 
> /hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:966:
>  error: bad use of '>'
> [ERROR]* Get a integer option >= the minimum allowed value.
> [ERROR]   ^
> [ERROR] 
> /hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:984:
>  error: bad use of '>'
> [ERROR]* Get a long option >= the minimum allowed value.
> [ERROR]^
> {code}
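
The usual remedy (a sketch only; the attached patch may differ) is to escape the
operator or wrap it in a {{@literal}} javadoc tag so the tool does not parse the
bare '>' as HTML:
{code:java}
/**
 * Get a long option {@literal >=} the minimum allowed value.
 */
// or, equivalently:
/**
 * Get a long option &gt;= the minimum allowed value.
 */
{code}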



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16390) Build fails due to bad use of '>' in javadoc

2019-06-24 Thread Kei Kori (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872014#comment-16872014
 ] 

Kei Kori commented on HADOOP-16390:
---

HADOOP-15183 made the intOption/longOption methods public, so the build now 
tries to generate javadoc for their comments.
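
A sketch of why the visibility change surfaced the error (the demo class and
signature are hypothetical, for illustration only): JDK8 doclint validates only
the comments that end up in the generated docs, i.e. those on public/protected
members.
{code:java}
import org.apache.hadoop.conf.Configuration;

public final class DoclintDemo {
  // While this member was private, javadoc/doclint never saw the comment
  // below; once it is public, the bare '>' is rejected as malformed HTML.
  /** Get a long option >= the minimum allowed value. */
  public static long longOption(Configuration conf, String key, long defVal,
      long min) {
    return Math.max(conf.getLong(key, defVal), min);
  }
}
{code}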

> Build fails due to bad use of '>' in javadoc
> 
>
> Key: HADOOP-16390
> URL: https://issues.apache.org/jira/browse/HADOOP-16390
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kei Kori
>Assignee: Kei Kori
>Priority: Critical
>
> Building trunk with JDK8 fails with the following errors:
> {code:java}
> [ERROR] 
> /hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:966:
>  error: bad use of '>'
> [ERROR]* Get a integer option >= the minimum allowed value.
> [ERROR]   ^
> [ERROR] 
> /hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:984:
>  error: bad use of '>'
> [ERROR]* Get a long option >= the minimum allowed value.
> [ERROR]^
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16390) Build fails due to bad use of '>' in javadoc

2019-06-24 Thread Kei Kori (JIRA)
Kei Kori created HADOOP-16390:
-

 Summary: Build fails due to bad use of '>' in javadoc
 Key: HADOOP-16390
 URL: https://issues.apache.org/jira/browse/HADOOP-16390
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Kei Kori
Assignee: Kei Kori


Building trunk with JDK8 fails with the following errors:
{code:java}
[ERROR] 
/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:966:
 error: bad use of '>'
[ERROR]* Get a integer option >= the minimum allowed value.
[ERROR]   ^
[ERROR] 
/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java:984:
 error: bad use of '>'
[ERROR]* Get a long option >= the minimum allowed value.
[ERROR]^
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #880: HDDS-1617. Restructure the code layout for Ozone Manager

2019-06-24 Thread GitBox
anuengineer closed pull request #880: HDDS-1617. Restructure the code layout 
for Ozone Manager
URL: https://github.com/apache/hadoop/pull/880
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2019-06-24 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871998#comment-16871998
 ] 

Akira Ajisaka commented on HADOOP-13386:


Now I'm +1 for Avro 1.9.0 in Hadoop 3.3.x.

Avro 1.9.0 removed the dependency on Jackson 1.x. Really nice.

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Assignee: Kalman
>Priority: Major
>
> Avro 1.8.x makes generated classes serializable, which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x.
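
A hedged illustration of the Spark point (the "User" record is a hypothetical
Avro-generated class; since AVRO-1502, shipped in 1.8.0, generated specific
records extend SpecificRecordBase, which implements java.io.Serializable):
{code:java}
import java.io.Serializable;

public class AvroSerializableCheck {
  public static void main(String[] args) {
    // "User" stands in for any class emitted by the Avro 1.8+ compiler.
    User u = User.newBuilder().setName("example").build();
    // Serializable means Spark can capture the record in task closures
    // without a custom Kryo registrator or wrapper types.
    System.out.println(u instanceof Serializable);  // expected: true
  }
}
{code}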



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16389) Bump Apache Avro to 1.9.0

2019-06-24 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16389:
---
Fix Version/s: (was: HADOOP-13386)

> Bump Apache Avro to 1.9.0
> -
>
> Key: HADOOP-16389
> URL: https://issues.apache.org/jira/browse/HADOOP-16389
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Fokko Driesprong
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505280177
 
 
   Thank You @anuengineer for the review.
   I have fixed the review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r297003720
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <pre>
+ * {@literal ->} acquire volume lock (will work)
+ *   {@literal +->} acquire bucket lock (will work)
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)
+ * </pre>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_BUCKET, VOLUME and BUCKET type resources, the same thread
+   * acquiring the lock again is allowed.
+   *
+   * For USER, PREFIX and S3_SECRET type resources, the same thread
+   * acquiring the lock again is not allowed.
+   *
+   * Special note for the user lock: a single thread can acquire either a
+   * single user lock or a multi user lock, but not both at the same time.
+   * @param resourceName - Resource name on which the user wants to acquire
+   * the lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if (value.isLevelLocked(lockSetVal)) {
+        currentLocks.add(value.getName());
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      // When acquiring multiple user locks, the reason for comparing them in
+      // lexical order is to avoid a deadlock scenario.
+
+      // Example: 1st thread acquires lock(ozone, hdfs),
+      // 2nd thread acquires lock(hdfs, ozone).
+      // If we don't acquire user locks in an order, there can be a deadlock:
+
+      // 1st thread acquired lock on ozone, waiting for lock on hdfs; 2nd
+      // thread acquired lock on hdfs, waiting for lock on ozone.
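
A hedged sketch of the two conventions the quoted hunk describes, for readers
following the review. The Resource constant names and the ordering code are
assumptions; the quoted hunk is truncated before the ordering is implemented.

   // Illustrative only, not the patch itself.
   void example(OzoneManagerLock lock, LockManager<String> manager,
       String oldUserResource, String newUserResource) {
     lock.acquireLock("/vol1", Resource.VOLUME);           // weight 1: ok
     lock.acquireLock("/vol1/bucket1", Resource.BUCKET);   // weight 2: ok, higher
     // lock.acquireLock("s3b", Resource.S3_BUCKET);       // weight 0 < 2: throws

     // Deadlock-free multi-user locking via lexical ordering, per the comment:
     String first = oldUserResource.compareTo(newUserResource) < 0
         ? oldUserResource : newUserResource;
     String second = first.equals(oldUserResource) ? newUserResource
         : oldUserResource;
     manager.lock(first);   // every thread locks the lexically smaller name first,
     manager.lock(second);  // so no two threads can hold opposite halves
   }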

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r297003664
 
 

 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 (quoted hunk identical to the OzoneManagerLock.java excerpt shown in full above)

[jira] [Updated] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-16350:

Attachment: HADOOP-16350-branch-2.8.01.patch

> Ability to tell HDFS client not to request KMS Information from NameNode
> 
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350-branch-2.8.01.patch, 
> HADOOP-16350-branch-2.8.patch, HADOOP-16350.00.patch, HADOOP-16350.01.patch, 
> HADOOP-16350.02.patch, HADOOP-16350.03.patch, HADOOP-16350.04.patch, 
> HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated KMSServer 
> delegation tokens were not requested from the remote NameNode. Many customers 
> used this as a security feature to prevent TDE/Encryption Zone data from being 
> distcped to remote clusters, while still allowing distcp of data residing in 
> folders that are not encrypted with a KMSProvider/Encryption Zone.
> After upgrading to a version of Hadoop that contains HADOOP-14104, distcp now 
> fails, because we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network: the data 
> residing in these TDE zones is critical and cannot be distcped between 
> clusters.
> I propose guarding this code with a new custom property, 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> let this area of code operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before this JIRA/Issue should 
> at least have remained available as an option, i.e. not requesting remote 
> KMSServer URIs (which would then attempt to get a delegation token even when 
> not operating on encrypted zones).
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed (the 
> request for an exception was denied), so the only solution is a feature that 
> does not attempt to request the tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception 
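
A minimal sketch of how the proposed switch might be read on the client side
(the guard class and method are hypothetical; only the property name comes from
the description above):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class KmsUriGuard {
  static final String ALLOW_REMOTE_KMS =
      "hadoop.security.kms.client.allow.remote.kms";

  // true (default) keeps the HADOOP-14104 behaviour; false restores the
  // pre-HADOOP-14104 behaviour of not asking the remote NameNode for KMS
  // URIs, and therefore not fetching kms-dt delegation tokens.
  static boolean shouldRequestRemoteKmsUris(Configuration conf) {
    return conf.getBoolean(ALLOW_REMOTE_KMS, true);
  }
}
{code}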

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r297000862
 
 

 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 (quoted hunk identical to the OzoneManagerLock.java excerpt shown in full above)

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r297000629
 
 

 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 (quoted hunk identical to the OzoneManagerLock.java excerpt shown in full above)

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r297000650
 
 

 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 (quoted hunk identical to the OzoneManagerLock.java excerpt shown in full above)

[GitHub] [hadoop] openinx commented on issue #977: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread GitBox
openinx commented on issue #977: HDFS-14541. When evictableMmapped or evictable 
size is zero, do not throw NoSuchElementException
URL: https://github.com/apache/hadoop/pull/977#issuecomment-505275532
 
 
   I saw you have done the cherry-pick, Thanks @goiri for your work :-) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1010: [HDFS-13694]Making md5 computing being in parallel with image loading

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1010: [HDFS-13694]Making md5 computing being 
in parallel with image loading
URL: https://github.com/apache/hadoop/pull/1010#issuecomment-505275373
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|---------:|:---------|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1179 | trunk passed |
   | +1 | compile | 63 | trunk passed |
   | +1 | checkstyle | 45 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 811 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 50 | trunk passed |
   | 0 | spotbugs | 176 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 174 | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 
extant findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 64 | the patch passed |
   | +1 | compile | 60 | the patch passed |
   | +1 | javac | 60 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 28 unchanged - 0 fixed = 29 total (was 28) |
   | +1 | mvnsite | 71 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 765 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 49 | the patch passed |
   | -1 | findbugs | 186 | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | -1 | unit | 5199 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 8978 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Should 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$DigestThread
 be a _static_ inner class?  At FSImageFormatProtobuf.java:inner class?  At 
FSImageFormatProtobuf.java:[lines 176-203] |
   | Failed junit tests | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
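   
   For context on the new findbugs warning (illustrative sketch only; the names
   follow the report, the code is not the patch): an inner class that never
   uses its enclosing instance should be static so it does not pin the outer
   object in memory.
   
       // SIC_INNER_SHOULD_BE_STATIC in a nutshell:
       class Loader {
         // non-static inner class: carries a hidden reference to Loader
         private class DigestThreadInner extends Thread { }
   
         // preferred: static nested class, no implicit outer reference
         private static class DigestThread extends Thread {
           @Override public void run() { /* compute MD5 while the image loads */ }
         }
       }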
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1010 |
   | JIRA Issue | HDFS-13694 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7727bfa4491b 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b76b843 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/testReport/ |
   | Max. process+thread count | 3517 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1010/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[GitHub] [hadoop] leosunli opened a new pull request #1011: [HDFS-14313]Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memor…

2019-06-24 Thread GitBox
leosunli opened a new pull request #1011: [HDFS-14313]Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memor…
URL: https://github.com/apache/hadoop/pull/1011
 
 
   …y instead of df/du
   
   Signed-off-by: sunlisheng 
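
   A hedged fragment of the idea in the title: derive used space from the
   replica metadata already held in memory instead of periodic df/du scans
   (volumeMap, replicas() and getNumBytes() are assumptions about the HDFS
   internals involved, not the patch itself):
   
       long usedBytes = 0L;
       for (ReplicaInfo replica : volumeMap.replicas(bpid)) {
         usedBytes += replica.getNumBytes();  // block length tracked in memory
       }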


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296984019
 
 

 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 (quoted hunk identical to the OzoneManagerLock.java excerpt shown in full above)

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296984611
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * 
+ *   
+ *   
+ *  WEIGHT   LOCK 
+ *   
+ *   
+ *  0   S3 Bucket Lock 
+ *   
+ *   
+ *  1   Volume Lock 
+ *   
+ *   
+ *  2   Bucket Lock 
+ *   
+ *   
+ *  3   User Lock 
+ *   
+ *   
+ *  4   S3 Secret Lock
+ *   
+ *   
+ *  5   Prefix Lock 
+ *   
+ * 
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. 
+ * 
+ * 
+ * For example:
+ * 
+ * {@literal ->} acquire volume lock (will work)
+ *   {@literal +->} acquire bucket lock (will work)
+ * {@literal +-->} acquire s3 bucket lock (will throw Exception)
+ * 
+ * 
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager manager;
+  private final ThreadLocal lockSet = ThreadLocal.withInitial(
+  () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+if (!resource.canLock(lockSet.get())) {
+  String errorMessage = getErrorMessage(resource);
+  LOG.error(errorMessage);
+  throw new RuntimeException(errorMessage);
+} else {
+  manager.lock(resourceName);
+  lockSet.set(resource.setLock(lockSet.get()));
+}
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if (value.isLevelLocked(lockSetVal)) {
+        currentLocks.add(value.getName());
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquires locks on multiple users.
+   * @param oldUserResource - old user resource name.
+   * @param newUserResource - new user resource name.
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      // When acquiring multiple user locks, the locks are taken in lexical
+      // order to avoid a deadlock scenario.
+
+      // Example: 1st thread acquires lock(ozone, hdfs),
+      // 2nd thread acquires lock(hdfs, ozone).
+      // If we don't acquire user locks in an order, there can be a deadlock:
+
+      // 1st thread acquired lock on ozone, waiting for lock on hdfs; 2nd
+      // thread acquired lock on hdfs, waiting for lock on ozone.
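
The quoted javadoc describes the weight check and the lexical-order rule abstractly, and the Resource enum it relies on is not part of this excerpt. Below is a minimal, self-contained sketch of both ideas (a per-thread bitmask for the weight hierarchy, and lexical ordering for the multi-user lock), with assumed semantics and hypothetical names; the re-entrancy rules quoted above are omitted for brevity.

```java
import java.util.function.Consumer;

/**
 * Self-contained sketch of the locking ideas above. Resource is a
 * hypothetical stand-in for the enum referenced by the patch; the
 * canLock/setLock semantics are assumed from the javadoc.
 */
public class LockOrderSketch {

  enum Resource {
    S3_BUCKET(0), VOLUME(1), BUCKET(2), USER(3), S3_SECRET(4), PREFIX(5);

    final int weight;

    Resource(int weight) {
      this.weight = weight;
    }

    /** Legal only if no strictly higher-weight lock is already held. */
    boolean canLock(short lockSet) {
      int higherBits = ~((1 << (weight + 1)) - 1); // bits above this weight
      return (lockSet & higherBits) == 0;
    }

    /** Record this lock in the per-thread bit set. */
    short setLock(short lockSet) {
      return (short) (lockSet | (1 << weight));
    }
  }

  static short acquire(Resource r, short lockSet) {
    if (!r.canLock(lockSet)) {
      throw new RuntimeException("cannot acquire " + r + " lock");
    }
    return r.setLock(lockSet);
  }

  /** Multi-user case: take the locks in lexical order on every thread. */
  static void acquireMultiUser(String oldUser, String newUser,
      Consumer<String> lockFn) {
    // lock(ozone, hdfs) and lock(hdfs, ozone) now lock in the same order,
    // so the circular wait described in the comment cannot occur.
    // (Equal names would need extra handling.)
    String first = oldUser.compareTo(newUser) < 0 ? oldUser : newUser;
    String second = oldUser.compareTo(newUser) < 0 ? newUser : oldUser;
    lockFn.accept(first);
    lockFn.accept(second);
  }

  public static void main(String[] args) {
    acquireMultiUser("ozone", "hdfs", u -> System.out.println("lock " + u));

    short lockSet = 0;
    lockSet = acquire(Resource.VOLUME, lockSet); // works: nothing held
    lockSet = acquire(Resource.BUCKET, lockSet); // works: VOLUME is lower weight
    acquire(Resource.S3_BUCKET, lockSet);        // throws: higher-weight locks held
  }
}
```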

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296984587
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296986201
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296983712
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##

[GitHub] [hadoop] bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505250084
 
 
   Test failures are not related to this patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] leosunli opened a new pull request #1010: [HDFS-13694]Making md5 computing being in parallel with image loading

2019-06-24 Thread GitBox
leosunli opened a new pull request #1010: [HDFS-13694]Making md5 computing 
being in parallel with image loading
URL: https://github.com/apache/hadoop/pull/1010
 
 
   Signed-off-by: sunlisheng 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #977: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread GitBox
goiri commented on issue #977: HDFS-14541. When evictableMmapped or evictable 
size is zero, do not throw NoSuchElementException
URL: https://github.com/apache/hadoop/pull/977#issuecomment-505242815
 
 
   Does GitHub allow cherry-picking? git cherry-pick -x requires -m, which is 
not clear to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] openinx commented on issue #977: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread GitBox
openinx commented on issue #977: HDFS-14541. When evictableMmapped or evictable 
size is zero, do not throw NoSuchElementException
URL: https://github.com/apache/hadoop/pull/977#issuecomment-505240745
 
 
   @goiri, I think we need to cherry-pick this fix to all branches, such 
as branch-2.9, etc. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' 
query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-505239525
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 500 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 496 | trunk passed |
   | +1 | compile | 246 | trunk passed |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 830 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 512 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 464 | the patch passed |
   | -1 | compile | 88 | hadoop-hdds in the patch failed. |
   | -1 | javac | 88 | hadoop-hdds in the patch failed. |
   | -0 | checkstyle | 34 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | -1 | findbugs | 112 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 27 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | 0 | asflicense | 28 | ASF License check generated no output? |
   | | | 4756 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/987 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d652ac3b24b7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #977: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #977: HDFS-14541. When evictableMmapped or 
evictable size is zero, do not throw NoSuchElementException
URL: https://github.com/apache/hadoop/pull/977#issuecomment-505233126
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 12 | https://github.com/apache/hadoop/pull/977 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/977 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-977/6/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #977: HDFS-14541. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException

2019-06-24 Thread GitBox
goiri merged pull request #977: HDFS-14541. When evictableMmapped or evictable 
size is zero, do not throw NoSuchElementException
URL: https://github.com/apache/hadoop/pull/977
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #930: HDDS-1651. Create a 
http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#discussion_r295128201
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -31,6 +31,7 @@
 import java.util.Optional;
 import java.util.TimeZone;
 
+import org.apache.hadoop.HadoopIllegalArgumentException;
 
 Review comment:
   Minor: Unused import.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 edited a comment on issue #930: HDDS-1651. Create a http.policy config for Ozone

2019-06-24 Thread GitBox
bharatviswa504 edited a comment on issue #930: HDDS-1651. Create a http.policy 
config for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-505229785
 
 
   Hi,
   I have a question:
   What is the reason to create a new http policy for ozone? Even when https 
is enabled, some of the additional config, like the key store location, still 
uses dfs.https.server.keystore.resource.
   
   I feel ozone can also re-use the hdfs config. That way, we don't miss the 
code paths where the HTTP policy is used, like OzoneManagerSnapShotProvider, 
which still uses DFSUtils.getHttpPolicy.
   
   We also create the HttpServer2.Builder with the code below, which again 
uses dfs.http.policy. This is the reason we need to set ozone.http.policy to 
dfs.http.policy to make it work. So, to avoid all of this, I feel we can use 
dfs.http.policy as before. Let me know your thoughts on this.
   ```
 builder = DFSUtil.httpServerTemplateForNNAndJN(conf, this.httpAddress,
 this.httpsAddress, name, getSpnegoPrincipal(), getKeytabFile());
   ```
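
For illustration, a minimal sketch of the fallback idea under discussion; the key name ozone.http.policy and the HTTP_ONLY default mirror this comment, not necessarily the final patch.

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch: resolve the Ozone HTTP policy, falling back to dfs.http.policy. */
public final class HttpPolicySketch {
  private HttpPolicySketch() {
  }

  /**
   * Returns the configured policy name. If ozone.http.policy is unset, the
   * HDFS-wide dfs.http.policy (default HTTP_ONLY) is used, so existing call
   * sites that read the HDFS key keep working.
   */
  public static String resolveHttpPolicy(Configuration conf) {
    String dfsPolicy = conf.get("dfs.http.policy", "HTTP_ONLY");
    return conf.get("ozone.http.policy", dfsPolicy);
  }
}
```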


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[GitHub] [hadoop] hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505228694
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 893 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 475 | trunk passed |
   | +1 | compile | 243 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 892 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 421 | the patch passed |
   | +1 | compile | 247 | the patch passed |
   | +1 | javac | 247 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | +1 | findbugs | 505 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 262 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1710 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7548 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 95409dcc34c1 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/10/testReport/ |
   | Max. process+thread count | 5389 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #880: HDDS-1617. Restructure the code layout for Ozone Manager

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #880: HDDS-1617. Restructure the code layout 
for Ozone Manager
URL: https://github.com/apache/hadoop/pull/880#issuecomment-505226258
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 7 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 42 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 469 | trunk passed |
   | +1 | compile | 243 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 309 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 429 | the patch passed |
   | +1 | compile | 263 | the patch passed |
   | +1 | cc | 263 | the patch passed |
   | +1 | javac | 263 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 11 new + 0 
unchanged - 0 fixed = 11 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   | +1 | findbugs | 518 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 242 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1095 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6121 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-880/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/880 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5cc20d5333f3 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-880/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-880/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-880/2/testReport/ |
   | Max. process+thread count | 5068 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozonefs hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-880/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 
DirectoryStagingCommitter: destination path exists
URL: https://github.com/apache/hadoop/pull/992#issuecomment-505224090
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 91 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1294 | trunk passed |
   | +1 | compile | 964 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | +1 | mvnsite | 123 | trunk passed |
   | +1 | shadedclient | 1016 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 94 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 180 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 77 | the patch passed |
   | +1 | compile | 924 | the patch passed |
   | +1 | javac | 924 | the patch passed |
   | -0 | checkstyle | 146 | root: The patch generated 2 new + 15 unchanged - 0 
fixed = 17 total (was 15) |
   | +1 | mvnsite | 118 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 706 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 92 | the patch passed |
   | +1 | findbugs | 196 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 535 | hadoop-common in the patch passed. |
   | +1 | unit | 289 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 7088 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-992/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/992 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 93bda9c4257e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-992/7/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-992/7/testReport/ |
   | Max. process+thread count | 1347 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-992/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #993: HDDS-1709. TestScmSafeNode is flaky

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #993: HDDS-1709. TestScmSafeNode is flaky
URL: https://github.com/apache/hadoop/pull/993#issuecomment-505223323
 
 
   Thank You @elek for reporting and fixing the issue.
   +1 LGTM. Can you take care of the checkstyle issue during the commit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #943: HDDS-1666. Issue in openKey when allocating block.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #943: HDDS-1666. Issue in openKey when 
allocating block.
URL: https://github.com/apache/hadoop/pull/943#issuecomment-505223055
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 471 | trunk passed |
   | +1 | compile | 241 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 787 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 345 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 558 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 464 | the patch passed |
   | +1 | compile | 256 | the patch passed |
   | +1 | javac | 256 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 658 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 586 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1689 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 6585 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestKeyManagerImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-943/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/943 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cd17417896f4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-943/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-943/2/testReport/ |
   | Max. process+thread count | 4169 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-943/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #981: HDDS-1696. RocksDB use separate 
Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981#issuecomment-505219413
 
 
   Closing this, as there is a way to pass an .ini file, read the values from 
it, and set the required RocksDB options.
   Thank You @anuengineer and @arp7 for the info.
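
For context, RocksDB can take such settings either from an OPTIONS (.ini) file, as noted above, or programmatically. A minimal sketch of the programmatic variant this PR was pursuing, with hypothetical paths; setWalDir points the write-ahead log at a separate directory.

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

/** Sketch: open a RocksDB instance with its WAL in a separate directory. */
public final class WalDirSketch {
  private WalDirSketch() {
  }

  public static RocksDB openWithSeparateWal(String dbPath, String walPath)
      throws RocksDBException {
    RocksDB.loadLibrary();
    Options options = new Options()
        .setCreateIfMissing(true)
        .setWalDir(walPath); // WAL files go to walPath instead of dbPath
    return RocksDB.open(options, dbPath);
  }
}
```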


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 closed pull request #981: HDDS-1696. RocksDB use separate Write-ahead-log location for OM RocksDB.

2019-06-24 Thread GitBox
bharatviswa504 closed pull request #981: HDDS-1696. RocksDB use separate 
Write-ahead-log location for OM RocksDB.
URL: https://github.com/apache/hadoop/pull/981
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505219271
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 449 | trunk passed |
   | +1 | compile | 241 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 497 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 412 | the patch passed |
   | +1 | compile | 238 | the patch passed |
   | +1 | javac | 238 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 611 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 233 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1058 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 5743 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b89d15c3ec24 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/9/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/9/testReport/ |
   | Max. process+thread count | 4563 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' query param to containers…

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #987: HDDS-1685. Recon: Add support for 'start' 
query param to containers…
URL: https://github.com/apache/hadoop/pull/987#issuecomment-505217793
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/987 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/987 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-987/3/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505216398
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 472 | trunk passed |
   | +1 | compile | 256 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 799 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 493 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 417 | the patch passed |
   | +1 | compile | 242 | the patch passed |
   | +1 | javac | 242 | the patch passed |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 623 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | the patch passed |
   | +1 | findbugs | 501 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 229 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1170 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5859 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux afba5320c9cd 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/8/testReport/ |
   | Max. process+thread count | 5072 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871865#comment-16871865
 ] 

Hadoop QA commented on HADOOP-16350:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.8 has 
1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in branch-2.8 
has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 

[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505212524
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 499 | trunk passed |
   | +1 | compile | 240 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 852 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 308 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 501 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for patch |
   | +1 | mvninstall | 449 | the patch passed |
   | +1 | compile | 246 | the patch passed |
   | +1 | cc | 246 | the patch passed |
   | +1 | javac | 246 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 514 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 255 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1394 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6434 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d47db4dcb237 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/11/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/11/testReport/ |
   | Max. process+thread count | 4450 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16363) S3Guard DDB store prune() doesn't translate AWS exceptions to IOEs

2019-06-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16363.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

Fixed in the HADOOP-15183 patch

> S3Guard DDB store prune() doesn't translate AWS exceptions to IOEs
> --
>
> Key: HADOOP-16363
> URL: https://issues.apache.org/jira/browse/HADOOP-16363
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> Fixing in HADOOP-15183: if you call prune() against a nonexistent DDB table, 
> the exception isn't being translated into an IOE.
> This is interesting, as the codepath does go through retry(); it's just that 
> the IO takes place inside the iterator, and we don't have translation checks 
> there.
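A minimal sketch of the translation this calls for: catch the SDK exception at 
the iterator boundary and convert it with the existing 
S3AUtils.translateException helper. The wrapper class itself is hypothetical, 
not the committed fix.

{code:java}
import java.io.UncheckedIOException;
import java.util.Iterator;

import com.amazonaws.AmazonClientException;

import static org.apache.hadoop.fs.s3a.S3AUtils.translateException;

/**
 * Hypothetical wrapper: surfaces AWS SDK exceptions thrown while iterating
 * (e.g. over a DDB scan) as the IOEs the FS contract expects. Iterator
 * methods cannot throw checked exceptions, hence the UncheckedIOException;
 * callers unwrap it at the retry() boundary.
 */
final class TranslatingIterator<T> implements Iterator<T> {
  private final Iterator<T> inner;
  private final String operation;
  private final String path;

  TranslatingIterator(Iterator<T> inner, String operation, String path) {
    this.inner = inner;
    this.operation = operation;
    this.path = path;
  }

  @Override
  public boolean hasNext() {
    try {
      return inner.hasNext();
    } catch (AmazonClientException e) {
      // translate the SDK exception into an IOE before it escapes
      throw new UncheckedIOException(translateException(operation, path, e));
    }
  }

  @Override
  public T next() {
    try {
      return inner.next();
    } catch (AmazonClientException e) {
      throw new UncheckedIOException(translateException(operation, path, e));
    }
  }
}
{code}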



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.

2019-06-24 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13454.
-
Resolution: Done

We've effectively done this.

> S3Guard: Provide custom FileSystem Statistics.
> --
>
> Key: HADOOP-13454
> URL: https://issues.apache.org/jira/browse/HADOOP-13454
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha3
>Reporter: Chris Nauroth
>Priority: Major
>
> Provide custom {{FileSystem}} {{Statistics}} with information about the 
> internal operational details of S3Guard.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #880: HDDS-1617. Restructure the code layout for Ozone Manager

2019-06-24 Thread GitBox
anuengineer commented on issue #880: HDDS-1617. Restructure the code layout for 
Ozone Manager
URL: https://github.com/apache/hadoop/pull/880#issuecomment-505204742
 
 
   Rebased to the latest trunk


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505203741
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 489 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 900 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 517 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 548 | the patch passed |
   | +1 | compile | 590 | the patch passed |
   | +1 | cc | 590 | the patch passed |
   | +1 | javac | 590 | the patch passed |
   | +1 | checkstyle | 261 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 897 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 441 | the patch passed |
   | +1 | findbugs | 1248 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 187 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1360 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8475 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3b066f2d865d 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/7/testReport/ |
   | Max. process+thread count | 5161 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer opened a new pull request #880: HDDS-1617. Restructure the code layout for Ozone Manager

2019-06-24 Thread GitBox
anuengineer opened a new pull request #880: HDDS-1617. Restructure the code 
layout for Ozone Manager
URL: https://github.com/apache/hadoop/pull/880
 
 
   * Move Volume Management to core.volume under Ozone Manager.
   * Move Bucket Management to core.bucket package under Ozone Manager.
   * Move Key Management functions to core.Keys package.
   * Move S3 and FS to Ozone Manager core package.
   * Move Metrics, discovery, persistence, security to its own package 
under ozone manager.
   * Rename web to commandLine, since the REST interface is no longer used 
now that we have S3.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15076) Enhance S3A troubleshooting documents and add a performance document

2019-06-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871834#comment-16871834
 ] 

Steve Loughran commented on HADOOP-15076:
-

No reason why it won't; we just need to make sure that when we pulled s3n 
(when was that, 3.0 or 3.1?) we don't do that on the backport... though do 
tell people it's really gone.

FWIW, AWS's pending move to V4-only auth is going to complicate a lot of 
things: expect more people with trouble very soon.

> Enhance S3A troubleshooting documents and add a performance document
> 
>
> Key: HADOOP-15076
> URL: https://issues.apache.org/jira/browse/HADOOP-15076
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: HADOOP-15076-001.patch, HADOOP-15076-002.patch, 
> HADOOP-15076-003.patch, HADOOP-15076-004.patch, HADOOP-15076-005.patch, 
> HADOOP-15076-006.patch
>
>
> A recurrent theme in s3a-related JIRAs, support calls etc is "tried upgrading 
> the AWS SDK JAR and then I got the error ...". We know here "don't do that", 
> but it's not something immediately obvious to lots of downstream users who 
> want to be able to drop in the new JAR to fix things/add new features.
> We need to spell this out quite clearly: "you cannot safely expect to do 
> this. If you want to upgrade the SDK, you will need to rebuild the whole of 
> hadoop-aws with the maven POM updated to the latest version, ideally 
> rerunning all the tests to make sure something hasn't broken."
> Maybe near the top of the index.md file, along with "never share your AWS 
> credentials with anyone".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16377) Moving logging APIs over to slf4j

2019-06-24 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871832#comment-16871832
 ] 

Steve Loughran commented on HADOOP-16377:
-

Core patch LGTM; I see the jenkins results are already missing. Can you do a 
rebase & resubmit so I can see those results?

I think this may be good to break up into individual parts before actually 
committing, especially looking at the Azure set, which will need an azure test 
run before going in (I can help there); we can have one for hdfs, yarn, 
submarine, all following on from hadoop-common.

I'm thinking here about what is best for backporting a patch which spans a lot 
of code... I've been backporting bits of the hadoop-aws and hadoop-azure 
modules, and it makes for simpler work if I can just cherrypick in a commit 
for a single module.

This doesn't mean changes for this patch (apart from that test run); just that 
when it goes in we create some subtasks for this one and commit each patch 
with that ID.


> Moving logging APIs over to slf4j
> -
>
> Key: HADOOP-16377
> URL: https://issues.apache.org/jira/browse/HADOOP-16377
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16357-002.patch, HADOOP-16377-001.patch, 
> HADOOP-16377-003.patch, HADOOP-16377-004.patch, HADOOP-16377-005.patch
>
>
> As of today, there are still 50 references to log4j1
> {code}
> $ grep -r "import org.apache.commons.logging.Log;" . | wc -l
>   50
> {code}
> To achieve the goal of HADOOP-12956/HADOOP-16206, we should invest time to 
> move them to slf4j
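For reference, the per-class change is mechanical; a before/after sketch (the 
class is made up, and the two halves are separate versions of the same file):

{code:java}
// --- Before: commons-logging (the API being removed) ---
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class Foo {
  private static final Log LOG = LogFactory.getLog(Foo.class);

  void start() {
    // commons-logging has no parameterized messages, so strings are
    // concatenated eagerly even when INFO is disabled
    LOG.info("starting " + this);
  }
}

// --- After: slf4j ---
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Foo {
  private static final Logger LOG = LoggerFactory.getLogger(Foo.class);

  void start() {
    // parameterized message: formatting cost only paid if INFO is enabled
    LOG.info("starting {}", this);
  }
}
{code}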



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505201888
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 509 | trunk passed |
   | +1 | compile | 288 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1004 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 550 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 144 | hadoop-ozone in the patch failed. |
   | -1 | compile | 58 | hadoop-ozone in the patch failed. |
   | -1 | cc | 58 | hadoop-ozone in the patch failed. |
   | -1 | javac | 58 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 783 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | -1 | findbugs | 110 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 274 | hadoop-hdds in the patch passed. |
   | -1 | unit | 111 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5101 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1840a3411380 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/testReport/ |
   | Max. process+thread count | 404 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15729) [s3a] stop treat fs.s3a.max.threads as the long-term minimum

2019-06-24 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871830#comment-16871830
 ] 

Sean Mackrory commented on HADOOP-15729:


Attaching a patch with my proposed change. Unfortunately, it has a bigger 
impact on simple things like a 1 GB upload than I thought, although it's hard 
to be sure it's not noise. See below for numbers to upload a 1 GB file from a 
machine in us-west-2 to a bucket / pre-created DynamoDB table in the same 
region. Maybe this is worth adding fs.s3a.core.threads with a moderate 
default value. Long-running processes (like Hive Server2) might access many 
buckets, and the per-bucket fs.s3a.max.threads pools grow absolutely out of 
control - core threads could still do the same unless the value is *much* 
lower, in which case you'd easily hit this performance regression anyway. I 
would suggest we just proceed and consider fs.s3a.core.threads if further 
performance testing reveals an issue. Thoughts?

Without change:
{code}
real    0m27.415s
user    0m25.128s
sys     0m6.377s

real    0m25.360s
user    0m25.081s
sys     0m6.368s

real    0m27.615s
user    0m25.296s
sys     0m6.015s

real    0m25.001s
user    0m25.408s
sys     0m6.717s

real    0m28.083s
user    0m24.764s
sys     0m5.774s

real    0m26.117s
user    0m25.192s
sys     0m5.867s
{code}

With change:
{code}
real    0m28.928s
user    0m24.182s
sys     0m5.699s

real    0m33.359s
user    0m25.508s
sys     0m6.407s

real    0m44.412s
user    0m24.565s
sys     0m6.226s

real    0m27.469s
user    0m25.326s
sys     0m6.142s

real    0m35.660s
user    0m25.206s
sys     0m6.154s

real    0m31.811s
user    0m25.042s
sys     0m6.057s
{code}
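For context, a sketch of the simplest variant the issue proposes - letting the 
core threads of the unbounded SDK pool time out. The pool shape here is 
illustrative, not the actual S3A wiring:

{code:java}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutSketch {
  /** coreThreads stands in for fs.s3a.max.threads. */
  static ThreadPoolExecutor newUnboundedPool(int coreThreads) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        coreThreads, Integer.MAX_VALUE,  // unbounded: grows past core on demand
        60L, TimeUnit.SECONDS,           // idle non-core threads exit after 60s
        new SynchronousQueue<>());       // hand-off: spawns a thread if none idle
    // The proposed one-line change: idle core threads are reclaimed too,
    // instead of pinning the pool at coreThreads threads forever.
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }
}
{code}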

> [s3a] stop treat fs.s3a.max.threads as the long-term minimum
> 
>
> Key: HADOOP-15729
> URL: https://issues.apache.org/jira/browse/HADOOP-15729
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15729.001.patch
>
>
> A while ago the s3a connector started experiencing deadlocks because the AWS 
> SDK requires an unbounded threadpool. It places monitoring tasks on the work 
> queue before the tasks they wait on, so it's possible (it has even happened 
> with larger-than-default threadpools) for the executor to become permanently 
> saturated and deadlock.
> So we started giving an unbounded threadpool executor to the SDK, and using a 
> bounded, blocking threadpool service for everything else S3A needs (although 
> currently that's only in the S3ABlockOutputStream). fs.s3a.max.threads then 
> only limits this threadpool; however, we also specified fs.s3a.max.threads as 
> the number of core threads in the unbounded threadpool, which in hindsight is 
> pretty terrible.
> Currently those core threads do not time out, so this is actually setting a 
> sort of minimum. Once that many tasks have been submitted, the threadpool 
> will be locked at that number until it bursts beyond that, but it will only 
> spin down that far. If fs.s3a.max.threads is set reasonably high and someone 
> uses a bunch of S3 buckets, they could easily have thousands of idle threads 
> constantly.
> We should either not use fs.s3a.max.threads for the core pool size and 
> introduce a new configuration, or we should simply allow core threads to 
> time out. I'm reading the OpenJDK source now to see what subtle differences 
> there are between core threads and other threads if core threads can time out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15729) [s3a] stop treat fs.s3a.max.threads as the long-term minimum

2019-06-24 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15729:
---
Attachment: HADOOP-15729.001.patch

> [s3a] stop treat fs.s3a.max.threads as the long-term minimum
> 
>
> Key: HADOOP-15729
> URL: https://issues.apache.org/jira/browse/HADOOP-15729
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15729.001.patch
>
>
> A while ago the s3a connector started experiencing deadlocks because the AWS 
> SDK requires an unbounded threadpool. It places monitoring tasks on the work 
> queue before the tasks they wait on, so it's possible (it has even happened 
> with larger-than-default threadpools) for the executor to become permanently 
> saturated and deadlock.
> So we started giving an unbounded threadpool executor to the SDK, and using a 
> bounded, blocking threadpool service for everything else S3A needs (although 
> currently that's only in the S3ABlockOutputStream). fs.s3a.max.threads then 
> only limits this threadpool; however, we also specified fs.s3a.max.threads as 
> the number of core threads in the unbounded threadpool, which in hindsight is 
> pretty terrible.
> Currently those core threads do not time out, so this is actually setting a 
> sort of minimum. Once that many tasks have been submitted, the threadpool 
> will be locked at that number until it bursts beyond that, but it will only 
> spin down that far. If fs.s3a.max.threads is set reasonably high and someone 
> uses a bunch of S3 buckets, they could easily have thousands of idle threads 
> constantly.
> We should either not use fs.s3a.max.threads for the core pool size and 
> introduce a new configuration, or we should simply allow core threads to 
> time out. I'm reading the OpenJDK source now to see what subtle differences 
> there are between core threads and other threads if core threads can time out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists

2019-06-24 Thread GitBox
steveloughran commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 
DirectoryStagingCommitter: destination path exists
URL: https://github.com/apache/hadoop/pull/992#issuecomment-505197183
 
 
   Gabor, fixed that; also rebased to trunk.
   
   *I have not done a full itest on this yet*


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #880: HDDS-1617. Restructure the code layout for Ozone Manager

2019-06-24 Thread GitBox
anuengineer closed pull request #880: HDDS-1617. Restructure the code layout 
for Ozone Manager
URL: https://github.com/apache/hadoop/pull/880
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505196534
 
 
   Thank you @anuengineer for the review.
   I have addressed the review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296926683
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption>Lock hierarchy</caption>
+ *   <tr><td> WEIGHT </td><td> LOCK </td></tr>
+ *   <tr><td> 0 </td><td> S3 Bucket Lock </td></tr>
+ *   <tr><td> 1 </td><td> Volume Lock </td></tr>
+ *   <tr><td> 2 </td><td> Bucket Lock </td></tr>
+ *   <tr><td> 3 </td><td> User Lock </td></tr>
+ *   <tr><td> 4 </td><td> S3 Secret Lock </td></tr>
+ *   <tr><td> 5 </td><td> Prefix Lock </td></tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <pre>
+ * {@literal ->} acquire volume lock (will work)
+ *   {@literal +->} acquire bucket lock (will work)
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)
+ * </pre>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+  () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+if (!resource.canLock(lockSet.get())) {
+  String errorMessage = getErrorMessage(resource);
+  LOG.error(errorMessage);
+  throw new RuntimeException(errorMessage);
+} else {
+  manager.lock(resourceName);
+  lockSet.set(resource.setLock(lockSet.get()));
+}
+  }
+
+  private String getErrorMessage(Resource resource) {
+return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+"acquire " + resource.name + " lock while holding " +
+getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+List<String> currentLocks = new ArrayList<>();
+int i=0;
+short lockSetVal = lockSet.get();
+for (Resource value : Resource.values()) {
+  if ((lockSetVal & value.setMask) == value.setMask) {
+currentLocks.add(value.name);
+  }
+}
+return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+  String newUserResource) {
+Resource resource = Resource.USER;
+if (!resource.canLock(lockSet.get())) {
+  String errorMessage = getErrorMessage(resource);
+  LOG.error(errorMessage);
+  throw new RuntimeException(errorMessage);
+} else {
+  int compare = newUserResource.compareTo(oldUserResource);
+  if (compare < 0) {
+manager.lock(newUserResource);
+try {
+  manager.lock(oldUserResource);
+} catch (Exception ex) {
+  // We got an exception acquiring 2nd user lock. Release already
+  // acquired user lock, and throw exception to the user.
+  manager.unlock(oldUserResource);
+  
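
A minimal usage sketch of the hierarchy the javadoc above describes; the 
`Resource` enum constants and the `OzoneConfiguration` import are assumed from 
the rest of the patch, which this excerpt truncates:

```java
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;
import org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource;

public class LockHierarchySketch {
  public static void main(String[] args) {
    OzoneManagerLock lock = new OzoneManagerLock(new OzoneConfiguration());
    // Weights must be non-decreasing as locks are acquired.
    lock.acquireLock("/vol1", Resource.VOLUME);          // weight 1: works
    lock.acquireLock("/vol1/bucket1", Resource.BUCKET);  // weight 2: works
    // Taking the weight-0 S3 bucket lock while holding weight 2 breaks the
    // hierarchy and fails fast:
    // lock.acquireLock("s3-bucket-name", Resource.S3_BUCKET); // RuntimeException
  }
}
```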

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296931497
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption>Lock hierarchy</caption>
+ *   <tr><td> WEIGHT </td><td> LOCK </td></tr>
+ *   <tr><td> 0 </td><td> S3 Bucket Lock </td></tr>
+ *   <tr><td> 1 </td><td> Volume Lock </td></tr>
+ *   <tr><td> 2 </td><td> Bucket Lock </td></tr>
+ *   <tr><td> 3 </td><td> User Lock </td></tr>
+ *   <tr><td> 4 </td><td> S3 Secret Lock </td></tr>
+ *   <tr><td> 5 </td><td> Prefix Lock </td></tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <pre>
+ * {@literal ->} acquire volume lock (will work)
+ *   {@literal +->} acquire bucket lock (will work)
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)
+ * </pre>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+  () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+if (!resource.canLock(lockSet.get())) {
+  String errorMessage = getErrorMessage(resource);
+  LOG.error(errorMessage);
+  throw new RuntimeException(errorMessage);
+} else {
+  manager.lock(resourceName);
+  lockSet.set(resource.setLock(lockSet.get()));
+}
+  }
+
+  private String getErrorMessage(Resource resource) {
+return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+"acquire " + resource.name + " lock while holding " +
+getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+List<String> currentLocks = new ArrayList<>();
+int i=0;
+short lockSetVal = lockSet.get();
+for (Resource value : Resource.values()) {
+  if ((lockSetVal & value.setMask) == value.setMask) {
+currentLocks.add(value.name);
+  }
+}
+return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+  String newUserResource) {
+Resource resource = Resource.USER;
+if (!resource.canLock(lockSet.get())) {
+  String errorMessage = getErrorMessage(resource);
+  LOG.error(errorMessage);
+  throw new RuntimeException(errorMessage);
+} else {
+  int compare = newUserResource.compareTo(oldUserResource);
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With 

[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505196036
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 495 | trunk passed |
   | +1 | compile | 246 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | -1 | mvninstall | 129 | hadoop-ozone in the patch failed. |
   | -1 | compile | 55 | hadoop-ozone in the patch failed. |
   | -1 | cc | 55 | hadoop-ozone in the patch failed. |
   | -1 | javac | 55 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 658 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | -1 | findbugs | 98 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 240 | hadoop-hdds in the patch failed. |
   | -1 | unit | 101 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4445 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 0f7b501ec50b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/testReport/ |
   | Max. process+thread count | 580 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists

2019-06-24 Thread GitBox
steveloughran commented on issue #992: HADOOP-16537 TeraSort Job failing on S3 
DirectoryStagingCommitter: destination path exists
URL: https://github.com/apache/hadoop/pull/992#issuecomment-505194091
 
 
   @bgaborg yes, seeing that too. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296926683
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td><b> WEIGHT </b></td> <td><b> LOCK </b></td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short) 0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user wants to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      int compare = newUserResource.compareTo(oldUserResource);
+      if (compare < 0) {
+        manager.lock(newUserResource);
+        try {
+          manager.lock(oldUserResource);
+        } catch (Exception ex) {
+          // We got an exception acquiring 2nd user lock. Release already
+          // acquired user lock, and throw exception to the user.
+          manager.unlock(newUserResource);
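
The lock-hierarchy check in the diff above is driven by a per-thread short used as a bitmask: each resource weight owns one bit, canLock refuses a resource whenever a higher-weight bit is already set, and setLock ORs the resource's bit in. Since the PR's Resource enum is not quoted in full here, the following is a self-contained sketch of that bitmask scheme under assumed semantics; the LockLevel enum, its mask fields, and the demo class are illustrations, not the PR's code.

{code:java}
// Standalone sketch of the weight-bitmask idea; LockLevel and its masks
// are hypothetical stand-ins for the PR's Resource enum.
public class LockLevelBitmaskDemo {

  enum LockLevel {
    S3_BUCKET(0), VOLUME(1), BUCKET(2), USER(3), S3_SECRET(4), PREFIX(5);

    private final short setMask;    // the bit owned by this level
    private final short higherMask; // bits of all strictly higher levels

    LockLevel(int weight) {
      this.setMask = (short) (1 << weight);
      this.higherMask = (short) ~((1 << (weight + 1)) - 1);
    }

    // Refuse the lock if any higher-weight lock is already held.
    boolean canLock(short lockSet) {
      return (lockSet & higherMask) == 0;
    }

    // Record this level as held.
    short setLock(short lockSet) {
      return (short) (lockSet | setMask);
    }
  }

  public static void main(String[] args) {
    short held = 0;
    held = LockLevel.VOLUME.setLock(held);   // volume lock: ok
    held = LockLevel.BUCKET.setLock(held);   // bucket lock: ok
    // Holding BUCKET (weight 2), the lower-weight S3 bucket lock is refused:
    System.out.println(LockLevel.S3_BUCKET.canLock(held)); // false
    System.out.println(LockLevel.USER.canLock(held));      // true
  }
}
{code}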

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296925887
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td><b> WEIGHT </b></td> <td><b> LOCK </b></td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short) 0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user wants to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505188933
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 519 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 485 | trunk passed |
   | +1 | compile | 257 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 830 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 319 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 517 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | -1 | mvninstall | 132 | hadoop-ozone in the patch failed. |
   | -1 | compile | 49 | hadoop-ozone in the patch failed. |
   | -1 | cc | 49 | hadoop-ozone in the patch failed. |
   | -1 | javac | 49 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 64 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 607 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 104 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 259 | hadoop-hdds in the patch passed. |
   | -1 | unit | 108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4977 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 00ed5d22ed66 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/testReport/ |
   | Max. process+thread count | 519 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296923309
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td><b> WEIGHT </b></td> <td><b> LOCK </b></td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short) 0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user wants to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
 
 Review comment:
   Yes, when we acquire the lock from OzoneManager methods like createVolume 
etc., we do those checks and then acquire the lock. For now, I will avoid 
the checks.
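
The acquireMultiUserLock under discussion (quoted more fully earlier in this thread) orders the two user locks by comparing the resource names, locking the smaller name first and releasing it if the second acquisition fails, which is the classic deadlock-avoidance pattern for acquiring two name-keyed locks. A minimal self-contained sketch of that idea, with plain ReentrantLocks standing in for the PR's LockManager (whose lock/unlock-by-name API is assumed):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedPairLockDemo {
  private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

  private ReentrantLock lockFor(String name) {
    return locks.computeIfAbsent(name, k -> new ReentrantLock());
  }

  /** Lock both names, lexicographically smaller one first. */
  public void lockPair(String a, String b) {
    String first = a.compareTo(b) < 0 ? a : b;
    String second = a.compareTo(b) < 0 ? b : a;
    lockFor(first).lock();
    try {
      lockFor(second).lock();
    } catch (RuntimeException ex) {
      // Release the lock we already hold before propagating the failure.
      lockFor(first).unlock();
      throw ex;
    }
  }

  public void unlockPair(String a, String b) {
    lockFor(a).unlock();
    lockFor(b).unlock();
  }
}
{code}

Two threads calling lockPair("u1", "u2") and lockPair("u2", "u1") both take "u1" first, so neither can hold one lock while waiting on the other.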


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #1006: HDDS-1723. Create 
new OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296921839
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td><b> WEIGHT </b></td> <td><b> LOCK </b></td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short) 0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user wants to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
 
 Review comment:
   I have added an isLevelLocked function and still left the getCurrentLocks() 
in the OzoneManagerLock itself.
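
If it follows the bit test used in getCurrentLocks above, the isLevelLocked helper presumably reduces to a single mask comparison. A hypothetical shape (names, signature, and wrapper class assumed for illustration, not the PR's code):

{code:java}
public final class LockBits {
  private LockBits() {
  }

  /** True if the bit for one lock level is set in the thread's lock set. */
  static boolean isLevelLocked(short lockSet, short setMask) {
    return (lockSet & setMask) == setMask;
  }

  public static void main(String[] args) {
    short lockSet = 0b0110;                                      // volume + bucket bits
    System.out.println(isLevelLocked(lockSet, (short) 0b0100));  // true
    System.out.println(isLevelLocked(lockSet, (short) 0b0001));  // false
  }
}
{code}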


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505186383
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 466 | trunk passed |
   | +1 | compile | 242 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 853 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 301 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 496 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | compile | 246 | the patch passed |
   | +1 | cc | 246 | the patch passed |
   | +1 | javac | 246 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | the patch passed |
   | +1 | findbugs | 524 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 145 | hadoop-hdds in the patch failed. |
   | -1 | unit | 149 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4897 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.om.request.TestOMClientRequestWithUserInfo |
   |   | hadoop.ozone.security.TestOzoneManagerBlockToken |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 71c06b814939 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/6/testReport/ |
   | Max. process+thread count | 1280 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #956: HDDS-1638.  Implement Key Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505184184
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #956: HDDS-1638.  
Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296916408
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -495,12 +495,27 @@ public boolean isVolumeEmpty(String volume) throws 
IOException {
   public boolean isBucketEmpty(String volume, String bucket)
       throws IOException {
     String keyPrefix = getBucketKey(volume, bucket);
-    //TODO: When Key ops are converted in to HA model, use cache also to
-    // determine bucket is empty or not.
+
+    // First check in key table cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        ((TypedTable<String, OmKeyInfo>) keyTable).cacheIterator();
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that entry is not for a delete key request.
+      if (key.startsWith(keyPrefix) && omKeyInfo != null) {
+        return false;
+      }
+    }
     try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
         keyTable.iterator()) {
       KeyValue<String, OmKeyInfo> kv = keyIter.seek(keyPrefix);
-      if (kv != null && kv.getKey().startsWith(keyPrefix)) {
+      // During iteration from DB, also check that this key is not
+      // marked for delete.
+      if (kv != null && kv.getKey().startsWith(keyPrefix) &&
+          keyTable.get(kv.getKey()) != null) {
 
 Review comment:
   Actually, thinking about it more, we don't need the 2nd check at all.
   
   Because to delete a bucket, we acquire the bucket lock and then check 
isBucketEmpty. So during this time, no operations can be added to the cache.
   
   So in isBucketEmpty, first check the cache, and then the DB. I feel that 
should be sufficient. Let me know if I am missing something here.
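
As a toy model of that argument (plain maps standing in for Ozone's table cache and RocksDB table, with Optional.empty() as a delete tombstone; none of this is the TypedTable/CacheKey API), the cache-first check plus a tombstone-aware table scan looks like:

{code:java}
import java.util.Map;
import java.util.Optional;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

public class BucketEmptyCheckDemo {
  // Optional.empty() marks a key with a pending delete; present = live key.
  private final Map<String, Optional<String>> keyCache =
      new ConcurrentHashMap<>();
  private final TreeMap<String, String> keyTable = new TreeMap<>();

  /** Caller is assumed to hold the bucket lock, per the comment above. */
  public boolean isBucketEmpty(String keyPrefix) {
    // 1. Cache first: any live key under the prefix means non-empty.
    for (Map.Entry<String, Optional<String>> e : keyCache.entrySet()) {
      if (e.getKey().startsWith(keyPrefix) && e.getValue().isPresent()) {
        return false;
      }
    }
    // 2. Backing table, skipping keys tombstoned in the cache.
    for (Map.Entry<String, String> kv
        : keyTable.tailMap(keyPrefix).entrySet()) {
      if (!kv.getKey().startsWith(keyPrefix)) {
        break; // past the bucket's key range
      }
      Optional<String> cached = keyCache.get(kv.getKey());
      if (cached == null || cached.isPresent()) {
        return false; // live key present in the table
      }
    }
    return true;
  }
}
{code}

With the bucket lock held, no new cache entries can appear between steps 1 and 2, which is why no further re-check is needed.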


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on issue #956: HDDS-1638.  Implement Key Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505175600
 
 
   Thank you, @hanishakoneru, for the review.
   I have addressed the review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #956: HDDS-1638.  
Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296908937
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -495,12 +495,27 @@ public boolean isVolumeEmpty(String volume) throws 
IOException {
   public boolean isBucketEmpty(String volume, String bucket)
       throws IOException {
     String keyPrefix = getBucketKey(volume, bucket);
-    //TODO: When Key ops are converted in to HA model, use cache also to
-    // determine bucket is empty or not.
+
+    // First check in key table cache.
+    Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+        ((TypedTable<String, OmKeyInfo>) keyTable).cacheIterator();
+    while (iterator.hasNext()) {
+      Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+          iterator.next();
+      String key = entry.getKey().getCacheKey();
+      OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+      // Making sure that entry is not for a delete key request.
+      if (key.startsWith(keyPrefix) && omKeyInfo != null) {
+        return false;
+      }
+    }
     try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
         keyTable.iterator()) {
       KeyValue<String, OmKeyInfo> kv = keyIter.seek(keyPrefix);
-      if (kv != null && kv.getKey().startsWith(keyPrefix)) {
+      // During iteration from DB, also check that this key is not
+      // marked for delete.
+      if (kv != null && kv.getKey().startsWith(keyPrefix) &&
+          keyTable.get(kv.getKey()) != null) {
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16378) RawLocalFileStatus throws exception if a file is created and deleted quickly

2019-06-24 Thread K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871768#comment-16871768
 ] 

K S commented on HADOOP-16378:
--

Eh, it'll be a little difficult to reproduce. We discovered the error by 
accident when running company software, and managed to reproduce it by running 
a set of programs alongside a bash script that quickly generates and deletes 
files that start with ".". I will try to reproduce it tomorrow evening.

> RawLocalFileStatus throws exception if a file is created and deleted quickly
> 
>
> Key: HADOOP-16378
> URL: https://issues.apache.org/jira/browse/HADOOP-16378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
> Environment: Ubuntu 18.04, Hadoop 2.7.3 (Though this problem exists 
> on later versions of Hadoop as well), Java 8 ( + Java 11).
>Reporter: K S
>Priority: Critical
>
> The bug occurs when NFS creates temporary ".nfs*" files as part of file moves 
> and accesses. If such a file is deleted very quickly after being created, a 
> RuntimeException is thrown. The root cause is in the loadPermissionInfo 
> method in org.apache.hadoop.fs.RawLocalFileSystem. To get the permission 
> info, it first does
>  
> {code:java}
> ls -ld{code}
>  and then attempts to get permissions info about each file. If a file 
> disappears between these two steps, an exception is thrown.
> *Reproduction Steps:*
> An isolated way to reproduce the bug is to run FileInputFormat.listStatus 
> over and over on the same dir that we’re creating those temp files in. On 
> Ubuntu or any other Linux-based system, this should fail intermittently.
> *Fix:*
> One way in which we managed to fix this was to ignore the exception being 
> thrown in loadPermissionInfo() if the exit code is 1 or 2. Alternatively, it's 
> possible that turning "useDeprecatedFileStatus" off in RawLocalFileSystem 
> would fix this issue, though we never tested this, and the flag was 
> implemented to fix -HADOOP-9652-. Could also fix in conjunction with 
> HADOOP-8772.
>  
>  
>  
>  
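
The race described here is general: any list-then-stat sequence can see entries vanish in between. For illustration, a minimal sketch of the tolerant behavior the report suggests, using plain java.nio (a hypothetical helper, not the RawLocalFileSystem patch):

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;

public class RaceTolerantList {

  // List a directory, then stat each entry, ignoring entries deleted
  // between the two steps (e.g. transient ".nfs*" files).
  public static List<BasicFileAttributes> listStatus(Path dir)
      throws IOException {
    List<BasicFileAttributes> result = new ArrayList<>();
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
      for (Path p : stream) {
        try {
          result.add(Files.readAttributes(p, BasicFileAttributes.class));
        } catch (NoSuchFileException vanished) {
          // The file disappeared after listing; skip it rather than
          // failing the whole listing, mirroring the suggested fix.
        }
      }
    }
    return result;
  }

  public static void main(String[] args) throws IOException {
    Path dir = Paths.get(args.length > 0 ? args[0] : ".");
    System.out.println(listStatus(dir).size() + " entries statted");
  }
}
{code}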



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #956: HDDS-1638.  
Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296898239
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -633,7 +633,7 @@ public void commitKey(OmKeyArgs args, long clientID) 
throws IOException {
       OmKeyInfo keyInfo = metadataManager.getOpenKeyTable().get(openKey);
       if (keyInfo == null) {
         throw new OMException("Commit a key without corresponding entry " +
-            objectKey, KEY_NOT_FOUND);
+            openKey, KEY_NOT_FOUND);
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider 
instance in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009#issuecomment-505161880
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 507 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1156 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 711 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | -0 | checkstyle | 17 | hadoop-tools/hadoop-aws: The patch generated 23 new 
+ 53 unchanged - 2 fixed = 76 total (was 55) |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 715 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | the patch passed |
   | +1 | findbugs | 61 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 278 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3857 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1009 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a02e6fd900d2 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 129576f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/testReport/ |
   | Max. process+thread count | 416 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1009/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505161191
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 471 | trunk passed |
   | +1 | compile | 237 | trunk passed |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 783 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 319 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 441 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | cc | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 633 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 516 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 141 | hadoop-hdds in the patch failed. |
   | -1 | unit | 38 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 4732 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 37866923afde 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 719d57b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/5/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #956: HDDS-1638.  
Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296893191
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -283,6 +293,13 @@
   private long maxUserVolumeCount;
 
 
+  private final ScmClient scmClient;
+  private final long scmBlockSize;
+  private final int preallocateBlocksMax;
+  private final boolean grpcBlockTokenEnabled;
+  private final boolean useRatis;
+
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
bharatviswa504 commented on a change in pull request #956: HDDS-1638.  
Implement Key Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#discussion_r296892339
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -712,6 +716,9 @@ message ListStatusResponse {
 
 message CreateKeyRequest {
     required KeyArgs keyArgs = 1;
+    // Set in OM HA during the preExecute step. This way all OMs use the
+    // same ID in OM HA.
+    optional uint64 ID = 2;
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871695#comment-16871695
 ] 

Ajay Kumar edited comment on HADOOP-16350 at 6/24/19 8:02 PM:
--

[~gss2002] thanks for the initial patch and detailed description of the issue. 
[~arp], [~xyao], [~szetszwo] thanks for reviews. Committed patch v5 to trunk. 
Will upload patch for branch 2.8. 


was (Author: ajayydv):
[~gss2002] thanks for the initial patch and detailed description of the issue. 
[~arp], [~xyao], [~szetszwo] thanks for reviews. Committed patch v5 to trunk. 
Will upload patch for HADOOP-2.7. 

> Ability to tell HDFS client not to request KMS Information from NameNode
> 
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350-branch-2.8.patch, HADOOP-16350.00.patch, 
> HADOOP-16350.01.patch, HADOOP-16350.02.patch, HADOOP-16350.03.patch, 
> HADOOP-16350.04.patch, HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs were not requested from the remote 
> NameNode, nor was their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, since data 
> residing in these TDE/Zones is very critical and cannot be distcped between 
> clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> let this area of code operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before this JIRA/issue 
> should have at least had an option to let Hadoop/KMS code operate as it did 
> before, by not requesting remote KMSServer URIs, which would then attempt to 
> get a delegation token even if not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO 

[jira] [Updated] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-16350:

Attachment: HADOOP-16350-branch-2.8.patch

> Ability to tell HDFS client not to request KMS Information from NameNode
> 
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350-branch-2.8.patch, HADOOP-16350.00.patch, 
> HADOOP-16350.01.patch, HADOOP-16350.02.patch, HADOOP-16350.03.patch, 
> HADOOP-16350.04.patch, HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs were not requested from the remote 
> NameNode, nor was their associated remote KMSServer delegation token. Many 
> customers were using this as a security feature to prevent TDE/Encryption 
> Zone data from being distcped to remote clusters. But there was still a use 
> case to allow distcp of data residing in folders that are not being encrypted 
> with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contained HADOOP-14104, distcp 
> now fails, as we, along with other customers (HDFS-13696), DO NOT allow 
> KMSServer endpoints to be exposed outside our cluster network, since data 
> residing in these TDE/Zones is very critical and cannot be distcped between 
> clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it will 
> let this area of code operate as it did before HADOOP-14104. I can see the 
> value in HADOOP-14104, but the way Hadoop worked before this JIRA/issue 
> should have at least had an option to let Hadoop/KMS code operate as it did 
> before, by not requesting remote KMSServer URIs, which would then attempt to 
> get a delegation token even if not operating on encrypted zones.
> The error below occurs when KMS Server traffic is not allowed between cluster 
> networks per an enterprise security standard that cannot be changed; the 
> request for an exception was denied, so the only solution is a feature that 
> does not attempt to request tokens. 
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 556079 for gss2002 on ha-hdfs:tech
> 19/05/29 14:06:10 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: 
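
A minimal sketch of how a client could use the proposed switch, assuming the 
property name from the description above (the committed patch may differ):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DisableRemoteKmsSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Opt out of requesting remote KMS URIs and their delegation tokens
    // (proposed property; defaults to true to keep HADOOP-14104 behavior).
    conf.setBoolean("hadoop.security.kms.client.allow.remote.kms", false);
    System.out.println(
        conf.getBoolean("hadoop.security.kms.client.allow.remote.kms", true));
  }
}
{code}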

[GitHub] [hadoop] hadoop-yetus commented on issue #956: HDDS-1638. Implement Key Write Requests to use Cache and DoubleBuffer.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #956: HDDS-1638.  Implement Key Write Requests 
to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/956#issuecomment-505156417
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | -1 | mvninstall | 151 | hadoop-ozone in trunk failed. |
   | +1 | compile | 292 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 514 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 433 | the patch passed |
   | +1 | compile | 235 | the patch passed |
   | +1 | cc | 235 | the patch passed |
   | +1 | javac | 235 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 622 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | the patch passed |
   | +1 | findbugs | 522 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 242 | hadoop-hdds in the patch passed. |
   | -1 | unit | 983 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5680 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/956 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3e78da4b47c0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 719d57b |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/testReport/ |
   | Max. process+thread count | 4897 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-956/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #977: (HDFS-14541)when evictableMmapped or evictable size is zero, do not throw NoSuchE…

2019-06-24 Thread GitBox
goiri commented on issue #977: (HDFS-14541)when evictableMmapped or evictable 
size is zero, do not throw NoSuchE… 
URL: https://github.com/apache/hadoop/pull/977#issuecomment-505151718
 
 
   LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871719#comment-16871719
 ] 

Hudson commented on HADOOP-16350:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16815 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16815/])
HADOOP-16350. Ability to tell HDFS client not to request KMS Information (ajay: 
rev 95c94dcca71a41e56a4c2989cf2aefdaf9923e13)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/HdfsKMSUtil.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java


> Ability to tell HDFS client not to request KMS Information from NameNode
> 
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch, 
> HADOOP-16350.02.patch, HADOOP-16350.03.patch, HADOOP-16350.04.patch, 
> HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. 
> Many customers were using this as a security feature to prevent 
> TDE/Encryption Zone data from being distcped to remote clusters. But there 
> was still a use case to allow distcp of data residing in folders that are 
> not encrypted with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contains HADOOP-14104, 
> distcp now fails, as we, along with other customers (HDFS-13696), DO NOT 
> allow KMSServer endpoints to be exposed outside our cluster network, since 
> data residing in these TDE/Zones is very critical and cannot be distcped 
> between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it 
> will let this area of code operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the way Hadoop worked before this JIRA/issue 
> should have at least been available behind an option, so that the 
> Hadoop/KMS code does not request remote KMSServer URIs, which would then 
> attempt to get a delegation token even when not operating on encrypted 
> zones.
> The error below occurs when KMS Server traffic is not allowed between 
> cluster networks per an enterprise security standard that cannot be 
> changed; the request for an exception was denied, so the only solution is 
> a feature that does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: 

[GitHub] [hadoop] anuengineer commented on issue #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-06-24 Thread GitBox
anuengineer commented on issue #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979#issuecomment-505142876
 
 
   @elek Could you please rebase this when you get a chance? Thanks in 
advance.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …

2019-06-24 Thread GitBox
bgaborg commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider instance 
in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009#issuecomment-505141723
 
 
   Yes, org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir is failing 
even without my patch on trunk! It would be worth looking into.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …

2019-06-24 Thread GitBox
bgaborg commented on issue #1009: HADOOP-16383. Pass ITtlTimeProvider instance 
in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009#issuecomment-505140565
 
 
   Tests run against Ireland. Got a few errors in the sequential tests:
   ```
   [ERROR] Failures:
   [ERROR]   
ITestS3AContractRootDir.testListEmptyRootDirectory:63->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:192->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:88
 listFiles(/, true).hasNext
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:222->Assert.assertTrue:41->Assert.fail:88
 files mismatch: between
 "s3a://gabota-versioned-bucket-ireland/file.txt"
   ] and
 "s3a://gabota-versioned-bucket-ireland/file.txt"
 "s3a://gabota-versioned-bucket-ireland/fork-0003/test/testSelectEmptyFile"
 
"s3a://gabota-versioned-bucket-ireland/fork-0003/test/testSelectEmptyFileWithConditions"
   ]
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive:132->Assert.fail:88
 non recursive delete should have raised an exception, but completed with exit 
code true
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmRootRecursive:157->AbstractFSContractTestBase.assertPathDoesNotExist:305->Assert.fail:88
 expected file to be deleted: unexpectedly found /testRmRootRecursive as  
S3AFileStatus{path=s3a://gabota-versioned-bucket-ireland/testRmRootRecursive; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1561403043000; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE 
eTag=d41d8cd98f00b204e9800998ecf8427e versionId=w4._phxXR86E6rKJFTQ4VunZaDEo1wbU
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testSimpleRootListing:207->Assert.assertEquals:631->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 expected:<1> but was:<2>
   ```
   
   I don't think those are related; they should go away if I clear the bucket 
and use a fresh DDB table. I'll check tomorrow what the cause of this might be.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …

2019-06-24 Thread GitBox
bgaborg opened a new pull request #1009: HADOOP-16383. Pass ITtlTimeProvider 
instance in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009
 
 
   …MetadataStore interface


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #945: HDDS-1646. Support real persistence in the k8s example files

2019-06-24 Thread GitBox
anuengineer merged pull request #945: HDDS-1646. Support real persistence in 
the k8s example files 
URL: https://github.com/apache/hadoop/pull/945
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #945: HDDS-1646. Support real persistence in the k8s example files

2019-06-24 Thread GitBox
anuengineer commented on issue #945: HDDS-1646. Support real persistence in the 
k8s example files 
URL: https://github.com/apache/hadoop/pull/945#issuecomment-505137979
 
 
   +1, Thanks for the patch. I will commit this now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16350) Ability to tell HDFS client not to request KMS Information from NameNode

2019-06-24 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16871695#comment-16871695
 ] 

Ajay Kumar commented on HADOOP-16350:
-

[~gss2002] thanks for the initial patch and the detailed description of the 
issue. [~arp], [~xyao], [~szetszwo] thanks for the reviews. Committed patch 
v5 to trunk. Will upload a patch for branch-2.7.

> Ability to tell HDFS client not to request KMS Information from NameNode
> 
>
> Key: HADOOP-16350
> URL: https://issues.apache.org/jira/browse/HADOOP-16350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 2.8.3, 3.0.0, 2.7.6, 3.1.2
>Reporter: Greg Senia
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16350.00.patch, HADOOP-16350.01.patch, 
> HADOOP-16350.02.patch, HADOOP-16350.03.patch, HADOOP-16350.04.patch, 
> HADOOP-16350.05.patch
>
>
> Before HADOOP-14104, remote KMSServer URIs and their associated remote 
> KMSServer delegation tokens were not requested from the remote NameNode. 
> Many customers were using this as a security feature to prevent 
> TDE/Encryption Zone data from being distcped to remote clusters. But there 
> was still a use case to allow distcp of data residing in folders that are 
> not encrypted with a KMSProvider/Encrypted Zone.
> So after upgrading to a version of Hadoop that contains HADOOP-14104, 
> distcp now fails, as we, along with other customers (HDFS-13696), DO NOT 
> allow KMSServer endpoints to be exposed outside our cluster network, since 
> data residing in these TDE/Zones is very critical and cannot be distcped 
> between clusters.
> I propose adding a new code block gated by the custom property 
> "hadoop.security.kms.client.allow.remote.kms". It will default to "true", 
> keeping the current behavior of HADOOP-14104, but when set to "false" it 
> will let this area of code operate as it did before HADOOP-14104. I can see 
> the value in HADOOP-14104, but the way Hadoop worked before this JIRA/issue 
> should have at least been available behind an option, so that the 
> Hadoop/KMS code does not request remote KMSServer URIs, which would then 
> attempt to get a delegation token even when not operating on encrypted 
> zones.
> The error below occurs when KMS Server traffic is not allowed between 
> cluster networks per an enterprise security standard that cannot be 
> changed; the request for an exception was denied, so the only solution is 
> a feature that does not attempt to request tokens.
> {code:java}
> $ hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
> -Dmapreduce.job.hdfs-servers.token-renewal.exclude=tech 
> hdfs:///processed/public/opendata/samples/distcp_test/distcp_file.txt 
> hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt
> 19/05/29 14:06:09 INFO tools.DistCp: Input Options: DistCpOptions
> {atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, 
> numListstatusThreads=0, maxMaps=20, mapBandwidth=100, 
> sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], 
> preserveRawXattrs=false, atomicWorkPath=null, logPath=null, 
> sourceFileListing=null, 
> sourcePaths=[hdfs:/processed/public/opendata/samples/distcp_test/distcp_file.txt],
>  
> targetPath=hdfs://tech/processed/public/opendata/samples/distcp_test/distcp_file2.txt,
>  targetPathExists=true, filtersFile='null', verboseLog=false}
> 19/05/29 14:06:09 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 5093920 for gss2002 on ha-hdfs:unit
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:unit, Ident: (HDFS_DELEGATION_TOKEN 
> token 5093920 for gss2002)
> 19/05/29 14:06:10 INFO security.TokenCache: Got dt for hdfs://unit; Kind: 
> kms-dt, Service: ha21d53en.unit.hdp.example.com:9292, Ident: (owner=gss2002, 
> renewer=yarn, realUser=, issueDate=1559153170120, maxDate=1559757970120, 
> sequenceNumber=237, masterKeyId=2)
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; 
> dirCnt = 0
> 19/05/29 14:06:10 INFO tools.SimpleCopyListing: Build file listing completed.
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO tools.DistCp: Number of paths in the copy list: 1
> 19/05/29 14:06:10 INFO client.AHSProxy: Connecting to Application History 
> server at ha21d53mn.unit.hdp.example.com/10.70.49.2:10200
> 19/05/29 14:06:10 INFO hdfs.DFSClient: Created 

[GitHub] [hadoop] anuengineer commented on a change in pull request #945: HDDS-1646. Support real persistence in the k8s example files

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #945: HDDS-1646. Support real 
persistence in the k8s example files 
URL: https://github.com/apache/hadoop/pull/945#discussion_r296869755
 
 

 ##
 File path: hadoop-ozone/dist/src/main/k8s/definitions/ozone/om-ss.yaml
 ##
 @@ -36,6 +36,8 @@ spec:
         prometheus.io/port: "9874"
         prometheus.io/path: "/prom"
     spec:
+      securityContext:
+        fsGroup: 1000
       initContainers:
       - name: init
         image: elek/ozone
 
 Review comment:
   I know, not part of this patch. But I hope we have a plan/JIRA to make this 
point to Ozone dockerhub. Thx


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common

2019-06-24 Thread GitBox
anuengineer merged pull request #969: HDDS-1597. Remove hdds-server-scm 
dependency from ozone-common
URL: https://github.com/apache/hadoop/pull/969
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #969: HDDS-1597. Remove hdds-server-scm dependency from ozone-common

2019-06-24 Thread GitBox
anuengineer commented on issue #969: HDDS-1597. Remove hdds-server-scm 
dependency from ozone-common
URL: https://github.com/apache/hadoop/pull/969#issuecomment-505135914
 
 
   +1, Thanks for the contribution and reviews.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296819487
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    int i=0;
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
 
 Review comment:
   nit: this can be moved to a helper function if possible, something like 
isLevelLocked(Resource, lockSet); that way there is a reusable helper 
available too. Or perhaps move this check into Resource, next to canLock, 
as something like isLocked?
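
A minimal, illustrative sketch of the suggested helper, with lock levels and 
weights assumed from the quoted javadoc (per-resource reentrancy rules are 
omitted); each level owns one bit of the short-sized lock set:

```
public class LockLevelSketch {

  enum LockLevel {
    S3_BUCKET(0), VOLUME(1), BUCKET(2), USER(3), S3_SECRET(4), PREFIX(5);

    private final short setMask;

    LockLevel(int weight) {
      this.setMask = (short) (1 << weight);
    }

    // The suggested helper: is this level already marked in lockSet?
    boolean isLevelLocked(short lockSet) {
      return (lockSet & setMask) == setMask;
    }

    // A level cannot be locked while any higher-weight bit is held.
    boolean canLock(short lockSet) {
      int higherWeights = ~(setMask | (setMask - 1)); // bits above this level
      return (lockSet & higherWeights) == 0;
    }

    short setLock(short lockSet) {
      return (short) (lockSet | setMask);
    }
  }

  public static void main(String[] args) {
    short held = LockLevel.VOLUME.setLock((short) 0);
    System.out.println(LockLevel.BUCKET.canLock(held));       // true
    System.out.println(LockLevel.S3_BUCKET.canLock(held));    // false
    System.out.println(LockLevel.VOLUME.isLevelLocked(held)); // true
  }
}
```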


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296861868
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    int i=0;
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      int compare = newUserResource.compareTo(oldUserResource);
+      if (compare < 0) {
+        manager.lock(newUserResource);
+        try {
+          manager.lock(oldUserResource);
+        } catch (Exception ex) {
+          // We got an exception acquiring 2nd user lock. Release already
+          // acquired user lock, and throw exception to the user.
+          manager.unlock(oldUserResource);
+          throw 

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296820349
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    int i=0;
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
 
 Review comment:
   Perhaps the caller is already doing this; if so, ignore. Check for valid 
args?
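
For illustration, a hedged sketch of such an argument check using the Guava 
Preconditions already on Hadoop's classpath (class and message names here are 
assumptions, not the patch):

```
import com.google.common.base.Preconditions;

class ArgCheckSketch {
  void acquireMultiUserLock(String oldUserResource, String newUserResource) {
    // Fail fast with a clear message instead of a later NullPointerException.
    Preconditions.checkNotNull(oldUserResource, "oldUserResource is null");
    Preconditions.checkNotNull(newUserResource, "newUserResource is null");
    // ... continue with the locking logic from the patch ...
  }
}
```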


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296821090
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    int i=0;
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
 
 Review comment:
   Add a comment that explains why we are doing this, i.e. the lexical-order 
comparison?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
anuengineer commented on a change in pull request #1006: HDDS-1723. Create new 
OzoneManagerLock class.
URL: https://github.com/apache/hadoop/pull/1006#discussion_r296858071
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.lock;
+
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.lock.LockManager;
+
+/**
+ * Provides different locks to handle concurrency in OzoneMaster.
+ * We also maintain lock hierarchy, based on the weight.
+ *
+ * <table>
+ *   <caption></caption>
+ *   <tr>
+ *     <td> WEIGHT </td> <td> LOCK </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 0 </td> <td> S3 Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 1 </td> <td> Volume Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 2 </td> <td> Bucket Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 3 </td> <td> User Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 4 </td> <td> S3 Secret Lock </td>
+ *   </tr>
+ *   <tr>
+ *     <td> 5 </td> <td> Prefix Lock </td>
+ *   </tr>
+ * </table>
+ *
+ * One cannot obtain a lower weight lock while holding a lock with higher
+ * weight. The other way around is possible. <br>
+ * <p>
+ * For example:
+ * <br>
+ * {@literal ->} acquire volume lock (will work)<br>
+ *   {@literal +->} acquire bucket lock (will work)<br>
+ *     {@literal +-->} acquire s3 bucket lock (will throw Exception)<br>
+ * </p>
+ */
+
+public class OzoneManagerLock {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private final LockManager<String> manager;
+  private final ThreadLocal<Short> lockSet = ThreadLocal.withInitial(
+      () -> Short.valueOf((short)0));
+
+
+  /**
+   * Creates new OzoneManagerLock instance.
+   * @param conf Configuration object
+   */
+  public OzoneManagerLock(Configuration conf) {
+    manager = new LockManager<>(conf);
+  }
+
+  /**
+   * Acquire lock on resource.
+   *
+   * For S3_Bucket, VOLUME, BUCKET type resource, same thread acquiring lock
+   * again is allowed.
+   *
+   * For USER, PREFIX, S3_SECRET type resource, same thread acquiring lock
+   * again is not allowed.
+   *
+   * Special Note for UserLock: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resourceName - Resource name on which user want to acquire lock.
+   * @param resource - Type of the resource.
+   */
+  public void acquireLock(String resourceName, Resource resource) {
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      manager.lock(resourceName);
+      lockSet.set(resource.setLock(lockSet.get()));
+    }
+  }
+
+  private String getErrorMessage(Resource resource) {
+    return "Thread '" + Thread.currentThread().getName() + "' cannot " +
+        "acquire " + resource.name + " lock while holding " +
+        getCurrentLocks().toString() + " lock(s).";
+
+  }
+
+  private List<String> getCurrentLocks() {
+    List<String> currentLocks = new ArrayList<>();
+    int i=0;
+    short lockSetVal = lockSet.get();
+    for (Resource value : Resource.values()) {
+      if ((lockSetVal & value.setMask) == value.setMask) {
+        currentLocks.add(value.name);
+      }
+    }
+    return currentLocks;
+  }
+
+  /**
+   * Acquire lock on multiple users.
+   * @param oldUserResource
+   * @param newUserResource
+   */
+  public void acquireMultiUserLock(String oldUserResource,
+      String newUserResource) {
+    Resource resource = Resource.USER;
+    if (!resource.canLock(lockSet.get())) {
+      String errorMessage = getErrorMessage(resource);
+      LOG.error(errorMessage);
+      throw new RuntimeException(errorMessage);
+    } else {
+      int compare = newUserResource.compareTo(oldUserResource);
 
 Review comment:
   If you do a simple swap or create two string variables, you can eliminate 
the whole if..else block and avoid repeating the locking code.
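
An illustrative sketch of that swap (not the patch itself): ordering the two 
names once collapses both branches into a single locking sequence, and two 
threads locking the same pair in opposite argument order cannot deadlock. 
NamedLock below is a stand-in for the patch's LockManager:

```
interface NamedLock {
  void lock(String name);
  void unlock(String name);
}

class MultiUserLockSketch {
  private final NamedLock manager;

  MultiUserLockSketch(NamedLock manager) {
    this.manager = manager;
  }

  void acquireMultiUserLock(String oldUserResource, String newUserResource) {
    // Lexical order decides which lock is taken first.
    boolean oldFirst = oldUserResource.compareTo(newUserResource) <= 0;
    String first = oldFirst ? oldUserResource : newUserResource;
    String second = oldFirst ? newUserResource : oldUserResource;

    manager.lock(first);
    try {
      manager.lock(second);
    } catch (RuntimeException ex) {
      // Failed to acquire the 2nd lock: release the held one, then rethrow.
      manager.unlock(first);
      throw ex;
    }
  }
}
```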


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to 

[GitHub] [hadoop] hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505129964
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 514 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 913 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 320 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 517 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 454 | the patch passed |
   | +1 | compile | 261 | the patch passed |
   | +1 | javac | 261 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 535 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1452 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6679 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0e143af120e5 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b220ec6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/7/testReport/ |
   | Max. process+thread count | 4781 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock class.

2019-06-24 Thread GitBox
hadoop-yetus commented on issue #1006: HDDS-1723. Create new OzoneManagerLock 
class.
URL: https://github.com/apache/hadoop/pull/1006#issuecomment-505127909
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 487 | trunk passed |
   | +1 | compile | 262 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 310 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 498 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 443 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 519 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 235 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1307 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6393 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1006 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 96075b2d0310 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b220ec6 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/6/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/6/testReport/ |
   | Max. process+thread count | 5325 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1006/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
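
    [Editor's note] The 0-vote spotbugs line in the report above flags that these modules still build with the deprecated FindBugs Maven configuration; the 0 vote is informational and does not fail the run. As a hedged illustration only (not part of this patch; the version and the exclude-filter path are assumptions, not taken from the report), migrating a module typically amounts to swapping findbugs-maven-plugin for the published SpotBugs plugin in its pom.xml:

{code:xml}
<!-- Minimal sketch of a FindBugs-to-SpotBugs migration in a module pom.
     com.github.spotbugs:spotbugs-maven-plugin is the published plugin;
     the version below is an assumption and should follow the project's
     dependency management. -->
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
  <version>3.1.12</version>
  <configuration>
    <!-- Hypothetical path: reuse the module's existing exclude filter, if any. -->
    <excludeFilterFile>dev-support/findbugsExcludeFile.xml</excludeFilterFile>
  </configuration>
  <executions>
    <execution>
      <goals>
        <!-- 'check' fails the build when bugs are reported, mirroring findbugs:check. -->
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}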
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


