[jira] [Commented] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop security

2019-05-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841900#comment-16841900
 ] 

Hadoop QA commented on HADOOP-16318:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-16318 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16318 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968057/HDFS-12433.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16254/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade JUnit from 4 to 5 in hadoop security
> --------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop security  (org.apache.hadoop.security)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-493313241
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 559 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1065 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 294 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 529 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 530 | the patch passed |
   | +1 | compile | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 552 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 204 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1915 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7326 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ecd9e7269767 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/testReport/ |
   | Max. process+thread count | 3676 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-799/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop security

2019-05-16 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841886#comment-16841886
 ] 

Akira Ajisaka edited comment on HADOOP-16318 at 5/17/19 3:59 AM:
-----------------------------------------------------------------

Thanks [~kkori] for the patch!
* Are the unit test failures related to the patch? If so, they need to be fixed.
* Would you fix the checkstyle warnings?
* All the changes are in the hadoop-common project, so I'll move this issue from 
HDFS to HADOOP.


was (Author: ajisakaa):
Thanks [~kkori] for the patch!
* Are the unit test failures related to the patch? If so, they need to be fixed.
* Would you fix the checkstyle warnings?
* All the changes are in the hadoop-common project, so I'll move this issue from 
HDFS to HADOOP.

Now I am interested in how you created the patch. If you wrote a script for the 
patch, would you share it?
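For reference, the mechanical changes a JUnit 4-to-5 migration (scripted or by
hand) makes to these tests are sketched below. This is an illustrative example
using a hypothetical test class, not an excerpt from the attached patches:

// Before (JUnit 4): org.junit packages, @Before, expected-exception tests.
import org.junit.Before;
import org.junit.Test;

public class TestExample {
  private String userName;

  @Before
  public void setUp() {
    userName = "hdfs";
  }

  @Test(expected = IllegalArgumentException.class)
  public void testRejectsEmptyName() {
    checkName("");
  }

  private static void checkName(String name) {
    if (name.isEmpty()) {
      throw new IllegalArgumentException("empty user name");
    }
  }
}

// After (JUnit 5): org.junit.jupiter packages, @Before becomes @BeforeEach,
// and @Test(expected = ...) becomes an explicit assertThrows call.
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

public class TestExample {
  private String userName;

  @BeforeEach
  public void setUp() {
    userName = "hdfs";
  }

  @Test
  public void testRejectsEmptyName() {
    assertThrows(IllegalArgumentException.class, () -> checkName(""));
  }

  private static void checkName(String name) {
    if (name.isEmpty()) {
      throw new IllegalArgumentException("empty user name");
    }
  }
}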


> Upgrade JUnit from 4 to 5 in hadoop security
> --------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop security  (org.apache.hadoop.security)






[jira] [Updated] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop security

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16318:
-----------------------------------
Summary: Upgrade JUnit from 4 to 5 in hadoop security  (was: Upgrade JUnit 
from 4 to 5 in hadoop-hdfs security)

> Upgrade JUnit from 4 to 5 in hadoop security
> --------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs security  
> (org.apache.hadoop.security)






[jira] [Updated] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop security

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16318:
-----------------------------------
Description: Upgrade JUnit from 4 to 5 in hadoop security  
(org.apache.hadoop.security)  (was: Upgrade JUnit from 4 to 5 in hadoop-hdfs 
security  (org.apache.hadoop.security))

> Upgrade JUnit from 4 to 5 in hadoop security
> --------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop security  (org.apache.hadoop.security)






[jira] [Updated] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop-hdfs security

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16318:
-----------------------------------
Issue Type: Sub-task  (was: Test)
Parent: HADOOP-14693

> Upgrade JUnit from 4 to 5 in hadoop-hdfs security
> -------------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs security  
> (org.apache.hadoop.security)






[jira] [Moved] (HADOOP-16318) Upgrade JUnit from 4 to 5 in hadoop-hdfs security

2019-05-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka moved HDFS-12433 to HADOOP-16318:
-----------------------------------------------

Component/s: (was: test)
 test
Key: HADOOP-16318  (was: HDFS-12433)
Project: Hadoop Common  (was: Hadoop HDFS)

> Upgrade JUnit from 4 to 5 in hadoop-hdfs security
> -------------------------------------------------
>
> Key: HADOOP-16318
> URL: https://issues.apache.org/jira/browse/HADOOP-16318
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Ajay Kumar
>Assignee: Kei Kori
>Priority: Major
> Attachments: HDFS-12433.001.patch, HDFS-12433.002.patch
>
>
> Upgrade JUnit from 4 to 5 in hadoop-hdfs security  
> (org.apache.hadoop.security)






[GitHub] [hadoop] mukul1987 opened a new pull request #829: HDDS-1550. MiniOzoneChaosCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-05-16 Thread GitBox
mukul1987 opened a new pull request #829: HDDS-1550. MiniOzoneChaosCluster is 
not shutting down all the threads during shutdown. Contributed by Mukul Kumar 
Singh.
URL: https://github.com/apache/hadoop/pull/829
 
 
   MiniOzoneChaosCluster is not shutting down all of its threads during 
shutdown. This patch fixes that issue.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #754: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-16 Thread GitBox
xiaoyuyao commented on a change in pull request #754: HDDS-1065. OM and DN 
should persist SCM certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/754#discussion_r284963127
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
 ##
 @@ -135,10 +135,11 @@ boolean verifySignature(byte[] data, byte[] signature,
    *
    * @param pemEncodedCert - pem encoded X509 Certificate
    * @param force - override any existing file
+   * @param caCert - Is CA certificate.
    * @throws CertificateException - on Error.
    *
    */
-  void storeCertificate(String pemEncodedCert, boolean force)
+  void storeCertificate(String pemEncodedCert, boolean force, boolean caCert)
 
 Review comment:
   Agree, let's add a new function as @anuengineer suggested. 
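   A minimal sketch of that suggestion, keeping the existing two-argument 
method and adding a dedicated entry point for CA certificates instead of the 
boolean flag; the method name storeCaCertificate is assumed here for 
illustration and is not taken from the PR:

   // Stores a regular certificate, unchanged from the current interface.
   void storeCertificate(String pemEncodedCert, boolean force)
       throws CertificateException;

   // Hypothetical new function: stores the SCM CA certificate as the trust
   // root, replacing the proposed boolean caCert parameter.
   void storeCaCertificate(String pemEncodedCert, boolean force)
       throws CertificateException;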





[GitHub] [hadoop] hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message 
for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#issuecomment-493299384
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 417 | trunk passed |
   | +1 | compile | 206 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 237 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 418 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 408 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | cc | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | the patch passed |
   | +1 | findbugs | 431 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 980 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 6866 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/828 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 0a46e72851d1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/testReport/ |
   | Max. process+thread count | 5400 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] linyiqun commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
linyiqun commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284957277
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
 
 Review comment:
   Actually, I meant that the flushTransactions thread should not be started 
inside OMDoubleBuffer itself. It would be better to let the caller, outside 
the OMDoubleBuffer class, trigger the flush behavior; flushing should be 
driven from outside rather than started by the class on its own, as sketched 
below.
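   A minimal sketch of that suggestion, assuming the flush loop is exposed 
through an explicit start() method (a hypothetical name) that the owner of 
the buffer invokes after construction:

   public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
     this.currentBuffer = new TransactionBuffer();
     this.readyBuffer = new TransactionBuffer();
     this.omMetadataManager = omMetadataManager;
     // Prepare, but do not start, the flush thread; the caller decides
     // when flushing begins.
     this.daemon = new Daemon(this::flushTransactions);
   }

   /** Starts the background flush thread; called by the owner of this buffer. */
   public void start() {
     daemon.start();
   }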





[GitHub] [hadoop] linyiqun commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
linyiqun commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284958539
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+    this.currentBuffer = new TransactionBuffer();
+    this.readyBuffer = new TransactionBuffer();
+    this.omMetadataManager = omMetadataManager;
+
+    // Daemon thread which runs in background and flushes transactions to DB.
+    daemon = new Daemon(this::flushTransactions);
+    daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer,
+   * and commits them to the DB.
+   */
+  private void flushTransactions() {
+    while (true) {
+      if (canFlush()) {
+        syncToDB = true;
+        setReadyBuffer();
+        final BatchOperation batchOperation = omMetadataManager.getStore()
+            .initBatchOperation();
+
+        readyBuffer.iterator().forEachRemaining((entry) -> {
+          try {
+            entry.getResponse().addToRocksDBBatch(omMetadataManager,
+                batchOperation);
+          } catch (IOException ex) {
+            // Adding the entry to the RocksDB batch got an exception.
+            // We should terminate the OM.
+            String message = "During flush to DB encountered error " +
+                ex.getMessage();
+            ExitUtil.terminate(1, message);
+          }
+        });
+
+        try {
+          omMetadataManager.getStore().commitBatchOperation(batchOperation);
+        } catch (IOException ex) {
+          // The flush to RocksDB got an exception.
+          // We should terminate the OM.
+          String message = "During flush to DB encountered error " +
+              ex.getMessage();
+          ExitUtil.terminate(1, message);
+        }
+
+        int flushedTransactionsSize = readyBuffer.size();
+        flushedTransactionCount.addAndGet(flushedTransactionsSize);
+        flushIterations.incrementAndGet();
+
+        LOG.info("Sync iteration {} flushed {} transactions in this iteration",
+            flushIterations.get(), flushedTransactionsSize);
+        readyBuffer.clear();
+        syncToDB = false;
+        // TODO: update the last updated index in OzoneManagerStateMachine.
+      }
+    }
+  }
+
+  /**
+   * Returns the flushed transaction count to OM DB.
+   * @return flushedTransactionCount
+   */
+  public long getFlushedTransactionCount() {
+    return flushedTransactionCount.get();
+  }
+
+  /**
+   * Returns total number of flush iterations run by sync thread.
+   * @return flushIterations
+   */
+  public long getFlushIterations() {
+    return flushIterations.get();
+  }
+
+  /**
+   * Add OmResponseBufferEntry to buffer.
+   * @param response
+   * @param transactionIndex
+   */
+  

[GitHub] [hadoop] mukul1987 commented on a change in pull request #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-16 Thread GitBox
mukul1987 commented on a change in pull request #799: HDDS-1451 : 
SCMBlockManager findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#discussion_r284954644
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 ##
 @@ -182,18 +182,28 @@ public AllocatedBlock allocateBlock(final long size, ReplicationType type,
       pipelineManager
           .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
               excludeList.getDatanodes(), excludeList.getPipelineIds());
-      Pipeline pipeline;
+      Pipeline pipeline = null;
       if (availablePipelines.size() == 0) {
         try {
           // TODO: #CLUTIL Remove creation logic when all replication types and
           // factors are handled by pipeline creator
           pipeline = pipelineManager.createPipeline(type, factor);
         } catch (IOException e) {
-          LOG.error("Pipeline creation failed for type:{} factor:{}",
+          LOG.warn("Pipeline creation failed for type:{} factor:{}",
               type, factor, e);
-          break;
+          LOG.info("Checking one more time for suitable pipelines");
 
 Review comment:
   Let's add the type and factor here as well.
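   That is, something along these lines (a sketch of the suggested change, 
not code from the PR):

   LOG.info("Checking one more time for suitable pipelines for type:{} factor:{}",
       type, factor);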





[GitHub] [hadoop] mukul1987 commented on a change in pull request #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-16 Thread GitBox
mukul1987 commented on a change in pull request #799: HDDS-1451 : 
SCMBlockManager findPipeline and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#discussion_r284954692
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 ##
 @@ -182,18 +182,28 @@ public AllocatedBlock allocateBlock(final long size, ReplicationType type,
       pipelineManager
           .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
               excludeList.getDatanodes(), excludeList.getPipelineIds());
-      Pipeline pipeline;
+      Pipeline pipeline = null;
       if (availablePipelines.size() == 0) {
         try {
           // TODO: #CLUTIL Remove creation logic when all replication types and
           // factors are handled by pipeline creator
           pipeline = pipelineManager.createPipeline(type, factor);
         } catch (IOException e) {
-          LOG.error("Pipeline creation failed for type:{} factor:{}",
+          LOG.warn("Pipeline creation failed for type:{} factor:{}",
               type, factor, e);
-          break;
+          LOG.info("Checking one more time for suitable pipelines");
+          availablePipelines = pipelineManager
+              .getPipelines(type, factor, Pipeline.PipelineState.OPEN,
+                  excludeList.getDatanodes(), excludeList.getPipelineIds());
+          if (availablePipelines.size() == 0) {
+            LOG.info("Could not find available pipeline even after trying " +
 
 Review comment:
   Same as above.





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
hadoop-yetus commented on a change in pull request #827: HDDS-1551. Implement 
Bucket Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#discussion_r284954493
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -313,6 +313,8 @@ private OzoneManager(OzoneConfiguration conf) throws IOException,
     RPC.setProtocolEngine(configuration, OzoneManagerProtocolPB.class,
         ProtobufRpcEngine.class);
 
+    metadataManager = new OmMetadataManagerImpl(configuration);
+
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #827: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493289186
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 421 | trunk passed |
   | +1 | compile | 196 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 877 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | trunk passed |
   | 0 | spotbugs | 241 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 423 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 395 | the patch passed |
   | +1 | compile | 202 | the patch passed |
   | +1 | cc | 202 | the patch passed |
   | +1 | javac | 202 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 60 | hadoop-ozone generated 4 new + 2 unchanged - 0 fixed = 
6 total (was 2) |
   | +1 | findbugs | 456 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 226 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 4726 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerStateMachine |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   |   | hadoop.ozone.om.TestOzoneManagerLock |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/827 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 1ebd0979f400 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/testReport/ |
   | Max. process+thread count | 1187 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-827/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv opened a new pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-16 Thread GitBox
ajayydv opened a new pull request #828: HDDS-1538. Update ozone protobuf 
message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #819:  HDDS-1501 : Create a Recon task 
interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493278407
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 382 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 125 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 391 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 63 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 430 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 141 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1151 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7128 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux c15ecb89aaa8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c183bd8 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/testReport/ |
   | Max. process+thread count | 4528 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on 
[HDDS-1499](https://issues.apache.org/jira/browse/HDDS-1499) and 
[HDDS-1512](https://issues.apache.org/jira/browse/HDDS-1512). This PR has 
commits from HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. I opened a PR to get a Jenkins run and 
to get initial comments on the class design and refactoring approach, so that a 
similar approach can be followed for other requests.





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512. This PR has commits from 
HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. I opened a PR to get a Jenkins run and 
to get initial comments on the class design and refactoring approach, so that a 
similar approach can be followed for other requests.





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. I opened a PR to get a Jenkins run and 
to get initial comments on the class design and refactoring approach, so that a 
similar approach can be followed for other requests.





[GitHub] [hadoop] bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 edited a comment on issue #827: HDDS-1551. Implement Bucket 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   **Note for reviewers:**
   The last commit is part of this Jira. I opened a PR to get a Jenkins run and 
to get initial comments on the class design approach, so that a similar 
approach can be followed for other requests.





[GitHub] [hadoop] bharatviswa504 commented on issue #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 commented on issue #827: HDDS-1551. Implement Bucket Write 
Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827#issuecomment-493272980
 
 
   This is dependent on HDDS-1499 and HDDS-1512.
   
   The last commit is part of this Jira. I opened a PR to get a Jenkins run and 
to get initial comments on the class design approach, so that a similar 
approach can be followed for other requests.





[GitHub] [hadoop] bharatviswa504 opened a new pull request #827: HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuffer.

2019-05-16 Thread GitBox
bharatviswa504 opened a new pull request #827: HDDS-1551. Implement Bucket 
Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/827
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #819:  HDDS-1501 : Create a Recon task 
interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493262095
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 369 | trunk passed |
   | +1 | compile | 190 | trunk passed |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 759 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 398 | the patch passed |
   | +1 | compile | 208 | the patch passed |
   | +1 | javac | 208 | the patch passed |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 64 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 432 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 134 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1146 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5356 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 53e772f24a15 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/testReport/ |
   | Max. process+thread count | 5401 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #819:  HDDS-1501 : Create a Recon task 
interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493261366
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 527 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 400 | trunk passed |
   | +1 | compile | 198 | trunk passed |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 766 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 421 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 389 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 665 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 64 | hadoop-ozone generated 3 new + 2 unchanged - 0 fixed = 
5 total (was 2) |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 136 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1121 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5893 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8465f3a9d2d6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/testReport/ |
   | Max. process+thread count | 4715 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
avijayanhwx commented on issue #819:  HDDS-1501 : Create a Recon task interface 
to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493251510
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao merged pull request #822: HDDS-1527. HDDS Datanode start fails due to datanode.id file read error

2019-05-16 Thread GitBox
xiaoyuyao merged pull request #822: HDDS-1527. HDDS Datanode start fails due to 
datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #822: HDDS-1527. HDDS Datanode start fails due to datanode.id file read error

2019-05-16 Thread GitBox
xiaoyuyao commented on issue #822: HDDS-1527. HDDS Datanode start fails due to 
datanode.id file read error
URL: https://github.com/apache/hadoop/pull/822#issuecomment-493248926
 
 
   +1, Thanks @swagle for the contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #819:  HDDS-1501 : Create a Recon task 
interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#issuecomment-493248035
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 414 | trunk passed |
   | +1 | compile | 203 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 235 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 412 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 397 | the patch passed |
   | +1 | compile | 209 | the patch passed |
   | +1 | javac | 209 | the patch passed |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 601 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 59 | hadoop-ozone generated 6 new + 2 unchanged - 0 fixed = 
8 total (was 2) |
   | -1 | findbugs | 152 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 124 | hadoop-hdds in the patch failed. |
   | -1 | unit | 99 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 4276 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5fad57693ba1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fab5b80 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/ozone-recon-codegen U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-819/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-16 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841753#comment-16841753
 ] 

Larry McCay commented on HADOOP-16287:
--

[~daryn] - I see no difference between what is provided here and existing 
hadoop.auth cookies for things like webhdfs.

I am assuming that admins are only allowing the use of proxyusers that can be 
trusted.

Here is my +1.

 

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16287-006.patch, 
> HADOOP-16287-007.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHdfs REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, by reading the doAs query parameter.
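
A hypothetical sketch of the requested flow (class and method names below are
invented for illustration; this is not the attached patch): after Kerberos
authentication establishes Knox as the real user, the handler would read the
doAs query parameter and enforce the hadoop.proxyuser ACLs before switching
the effective user, much as webhdfs does today:

{code:java}
import javax.servlet.http.HttpServletRequest;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public final class TrustedProxySketch {
  private TrustedProxySketch() { }

  /**
   * Returns the effective user: the doAs user when the authenticated caller
   * (e.g. knox) is an authorized proxy, otherwise the caller itself.
   */
  static UserGroupInformation effectiveUser(HttpServletRequest request,
      UserGroupInformation realUser) throws AuthorizationException {
    String doAs = request.getParameter("doAs");
    if (doAs == null || doAs.isEmpty()) {
      return realUser;  // no impersonation requested
    }
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser(doAs, realUser);
    // Enforces the hadoop.proxyuser.<caller>.{hosts,users,groups} ACLs.
    ProxyUsers.authorize(proxyUgi, request.getRemoteAddr());
    return proxyUgi;
  }
}
{code}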



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on a change in pull request #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
avijayanhwx commented on a change in pull request #819:  HDDS-1501 : Create a 
Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284910869
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try the process step twice and then
+   * reprocess once (if process failed twice) to absorb the events. If a
+   * task's reprocess call has failed more than 2 times across events, it is
+   * unregistered (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  Collection<Callable<Pair>> tasks = new ArrayList<>();
+  for (Map.Entry<String, ReconDBUpdateTask> taskEntry :
+  reconDBUpdateTasks.entrySet()) {
+ReconDBUpdateTask task = taskEntry.getValue();
+tasks.add(() -> task.process(events));
+  }
+
+  List<String> failedTasks = new ArrayList<>();
+  List<Future<Pair>> results = executorService.invokeAll(tasks);
+  for (Future<Pair> f : results) {
+String taskName = 

[GitHub] [hadoop] avijayanhwx commented on a change in pull request #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
avijayanhwx commented on a change in pull request #819:  HDDS-1501 : Create a 
Recon task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284910792
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
 ##
 @@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_DEFAULT;
+import static 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys.OZONE_RECON_TASK_THREAD_COUNT_KEY;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.ReconTaskStatus;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Implementation of ReconTaskController.
+ */
+public class ReconTaskControllerImpl implements ReconTaskController {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReconTaskControllerImpl.class);
+
+  private Map<String, ReconDBUpdateTask> reconDBUpdateTasks;
+  private ExecutorService executorService;
+  private int threadCount = 1;
+  private final Semaphore taskSemaphore = new Semaphore(1);
+  private final ReconOMMetadataManager omMetadataManager;
+  private Map<String, AtomicInteger> taskFailureCounter = new HashMap<>();
+  private static final int TASK_FAILURE_THRESHOLD = 2;
+  private ReconTaskStatusDao reconTaskStatusDao;
+
+  @Inject
+  public ReconTaskControllerImpl(OzoneConfiguration configuration,
+ ReconOMMetadataManager omMetadataManager,
+ Configuration sqlConfiguration) {
+this.omMetadataManager = omMetadataManager;
+reconDBUpdateTasks = new HashMap<>();
+threadCount = configuration.getInt(OZONE_RECON_TASK_THREAD_COUNT_KEY,
+OZONE_RECON_TASK_THREAD_COUNT_DEFAULT);
+executorService = Executors.newFixedThreadPool(threadCount);
+reconTaskStatusDao = new ReconTaskStatusDao(sqlConfiguration);
+  }
+
+  @Override
+  public void registerTask(ReconDBUpdateTask task) {
+String taskName = task.getTaskName();
+LOG.info("Registered task " + taskName + " with controller.");
+
+// Store task in Task Map.
+reconDBUpdateTasks.put(taskName, task);
+// Store Task in Task failure tracker.
+taskFailureCounter.put(taskName, new AtomicInteger(0));
+// Create DB record for the task.
+ReconTaskStatus reconTaskStatusRecord = new ReconTaskStatus(taskName,
+0L, 0L);
+reconTaskStatusDao.insert(reconTaskStatusRecord);
+  }
+
+  /**
+   * For every registered task, we try the process step twice and then
+   * reprocess once (if process failed twice) to absorb the events. If a
+   * task's reprocess call has failed more than 2 times across events, it is
+   * unregistered (blacklisted).
+   * @param events set of events
+   * @throws InterruptedException
+   */
+  @Override
+  public void consumeOMEvents(OMUpdateEventBatch events)
+  throws InterruptedException {
+
+
+taskSemaphore.acquire();
+
+try {
+  Collection<Callable<Pair>> tasks = new ArrayList<>();
+  for (Map.Entry<String, ReconDBUpdateTask> taskEntry :
+  reconDBUpdateTasks.entrySet()) {
+ReconDBUpdateTask task = taskEntry.getValue();
+tasks.add(() -> task.process(events));
+  }
+
+  List<String> failedTasks = new ArrayList<>();
+  List<Future<Pair>> results = executorService.invokeAll(tasks);
 
 Review comment:
   We will be using 1 thread per task. Instead of making 

[GitHub] [hadoop] swagle commented on a change in pull request #819: HDDS-1501 : Create a Recon task interface to update internal DB on updates from OM.

2019-05-16 Thread GitBox
swagle commented on a change in pull request #819:  HDDS-1501 : Create a Recon 
task interface to update internal DB on updates from OM.
URL: https://github.com/apache/hadoop/pull/819#discussion_r284898978
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconDBUpdateTask.java
 ##
 @@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import java.util.Collection;
+
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+/**
+ * Abstract class used to denote a Recon task that needs to act on OM DB 
events.
+ */
+public abstract class ReconDBUpdateTask {
+
+  private String taskName;
+
+  protected ReconDBUpdateTask(String taskName) {
+this.taskName = taskName;
+  }
+
+  /**
+   * Return task name.
+   * @return task name
+   */
+  public String getTaskName() {
+return taskName;
+  }
+
+  /**
+   * Return the list of tables that the task is listening on.
+   * Empty list means the task is NOT listening on any tables.
+   * @return Collection of Tables.
+   */
+  protected abstract Collection<String> getTablesListeningOn();
 
 Review comment:
   Weird naming convention. What about getTaskTables()?
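
   For orientation, a minimal hypothetical subclass (task and table names are
   invented; the process()/reprocess() members of the abstract class are not
   visible in the excerpt above, so they are omitted here):

package org.apache.hadoop.ozone.recon.tasks;

import java.util.Collection;
import java.util.Collections;

public class BucketCountTask extends ReconDBUpdateTask {

  public BucketCountTask() {
    super("BucketCountTask");
  }

  @Override
  protected Collection<String> getTablesListeningOn() {
    // React only to updates on the OM bucket table.
    return Collections.singletonList("bucketTable");
  }
}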


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16248) Fix MutableQuantiles memory leak

2019-05-16 Thread Alexis Daboville (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841678#comment-16841678
 ] 

Alexis Daboville commented on HADOOP-16248:
---

alexis.dabovi...@gmail.com should do. Thanks.

> Fix MutableQuantiles memory leak
> 
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Alexis Daboville
>Priority: Major
> Attachments: HADOOP-16248.00.patch, mutable-quantiles-leak.png, 
> mutable-quantiles.patch
>
>
> In some circumstances (high GC, high CPU usage, creating lots of
>  S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall
>  behind processing tasks that are submitted to it; because tasks are
>  submitted on a regular schedule, the unbounded queue backing the
>  {{ExecutorService}} might grow to several gigs [2]. By using
>  {{scheduleWithFixedDelay}} instead, we ensure that this leak won't happen
>  under pressure. To mitigate the growth, a simple fix [3] is proposed:
>  replacing {{scheduler.scheduleAtFixedRate}} with
>  {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of 
> {{MutableQuantiles}}: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch
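
The difference is easy to demonstrate with the plain JDK (a standalone sketch, 
not the Hadoop code): with {{scheduleAtFixedRate}} an overrunning task is 
already overdue when it finishes, so a backlogged scheduler runs catch-up 
executions back to back and never gets idle time to drain, while with 
{{scheduleWithFixedDelay}} the schedule simply slips, since the next run 
always starts a full delay after the previous one completes.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateVsFixedDelay {
  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    Runnable slowSnapshot = () -> {
      try {
        Thread.sleep(500);  // the task takes 5x its 100 ms period
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    };
    // Fixed rate: each run is immediately overdue again, so the single
    // scheduler thread is kept permanently busy catching up.
    scheduler.scheduleAtFixedRate(slowSnapshot, 0, 100, TimeUnit.MILLISECONDS);
    // Fixed delay (the proposed fix): the next run starts 100 ms after the
    // previous one finishes, so pressure slows the schedule instead.
    // scheduler.scheduleWithFixedDelay(slowSnapshot, 0, 100,
    //     TimeUnit.MILLISECONDS);
  }
}
{code}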



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16317) ABFS: improve random read performance

2019-05-16 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-16317:


 Summary: ABFS: improve random read performance
 Key: HADOOP-16317
 URL: https://issues.apache.org/jira/browse/HADOOP-16317
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Da Zhou


Improving random read performance is an interesting topic. ABFS doesn't perform 
well when reading column-format files: the read path involves many seek 
operations, which makes the readAhead useless, and if readAhead is applied 
unwisely it leads to unnecessary data requests.

Hence creating this Jira as a reminder to track the investigation and progress 
of the work.
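
The access pattern that defeats sequential readAhead looks roughly like this 
(illustrative only, not ABFS code): a footer read near the tail of the file 
followed by short, scattered column-chunk reads, so any bytes prefetched past 
each chunk are never consumed.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;

public final class ColumnarReadPattern {
  private ColumnarReadPattern() { }

  /** Positioned read of one column chunk; the next read is far away. */
  static byte[] readChunk(FSDataInputStream in, long offset, int length)
      throws IOException {
    byte[] buffer = new byte[length];
    in.readFully(offset, buffer);  // prefetching beyond this is wasted
    return buffer;
  }
}
{code}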



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284847079
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineFactory.java
 ##
 @@ -61,4 +61,13 @@ public Pipeline create(ReplicationType type, 
ReplicationFactor factor,
   List nodes) {
 return providers.get(type).create(factor, nodes);
   }
+
+  @VisibleForTesting
+  public PipelineProvider getProvider(ReplicationType type) {
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-493185963
 
 
   Thank You @lokeshj1703 for the review.
   I have addressed the review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284844530
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestory.java
 ##
 @@ -97,8 +97,11 @@ public void testPipelineCreationOnNodeRestart() throws 
Exception {
 }
 
 // try creating another pipeline now
+RatisPipelineProvider ratisPipelineProvider = (RatisPipelineProvider)
+pipelineManager.getPipelineFactory().getProvider(
+HddsProtos.ReplicationType.RATIS);
 try {
-  RatisPipelineUtils.createPipeline(pipelines.get(0), conf);
+  ratisPipelineProvider.createPipeline(pipelines.get(0));
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838741
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/MockRatisPipelineProvider.java
 ##
 @@ -37,4 +37,9 @@ public MockRatisPipelineProvider(NodeManager nodeManager,
   protected void initializePipeline(Pipeline pipeline) throws IOException {
 // do nothing as the datanodes do not exist
   }
+
+  @Override
+  public void shutdown() {
+
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838447
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
 ##
 @@ -72,4 +72,9 @@ public Pipeline create(ReplicationFactor factor,
 .setNodes(nodes)
 .build();
   }
+
+  @Override
+  public void shutdown() {
+
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838490
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
 ##
 @@ -1017,6 +1017,9 @@ public void stop() {
 } catch (Exception ex) {
   LOG.error("SCM Metadata store stop failed", ex);
 }
+
+// shutdown pipeline provider.
+pipelineManager.getPipelineFactory().shutdown();
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838303
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -346,6 +347,11 @@ public void triggerPipelineCreation() {
 backgroundPipelineCreator.triggerPipelineCreation();
   }
 
+  @Override
+  public PipelineFactory getPipelineFactory() {
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284838122
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
+  }
+
   protected void initializePipeline(Pipeline pipeline) throws IOException {
-RatisPipelineUtils.createPipeline(pipeline, conf);
+createPipeline(pipeline);
+  }
+
+  /**
+   * Sends ratis command to create pipeline on all the datanodes.
+   *
+   * @param pipeline  - Pipeline to be created
+   * @throws IOException if creation fails
+   */
+  public void createPipeline(Pipeline pipeline)
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837740
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
 ##
 @@ -75,4 +75,6 @@ void finalizeAndDestroyPipeline(Pipeline pipeline, boolean 
onTimeout)
   void startPipelineCreator();
 
   void triggerPipelineCreation();
+
+  PipelineFactory getPipelineFactory();
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837697
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -24,35 +24,75 @@
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementPolicy;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.ratis.RatisHelper;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.stream.Collectors;
 
 /**
  * Implements Api for creating ratis pipelines.
  */
 public class RatisPipelineProvider implements PipelineProvider {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RatisPipelineProvider.class);
+
   private final NodeManager nodeManager;
   private final PipelineStateManager stateManager;
   private final Configuration conf;
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private final int parallelisimForPool = 3;
+
+  private final ForkJoinPool.ForkJoinWorkerThreadFactory factory =
+  (pool -> {
+final ForkJoinWorkerThread worker = ForkJoinPool.
+defaultForkJoinWorkerThreadFactory.newThread(pool);
+worker.setName("ratisCreatePipeline" + worker.getPoolIndex());
 
 Review comment:
   Done
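
   For context, the quoted factory would typically be paired with a bounded 
   pool along these lines (a sketch of standard JDK usage, not the rest of 
   the patch, which is cut off in the quote above):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;

public class NamedPipelinePool {
  private static final ForkJoinPool.ForkJoinWorkerThreadFactory FACTORY =
      pool -> {
        ForkJoinWorkerThread worker =
            ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
        worker.setName("ratisCreatePipeline" + worker.getPoolIndex());
        return worker;
      };

  // Dedicated pool with parallelism 3, replacing commonPool().
  public static ForkJoinPool create() {
    return new ForkJoinPool(3, FACTORY, null, false);
  }
}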


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r284837312
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
 
 Review comment:
   This is done based on Arpit's comment: on an unclean shutdown this 
terminates abruptly anyway, so we can use shutdownNow() instead of 
awaitTermination() in the normal case too.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16268) Allow custom wrapped exception to be thrown by server if RPC call queue is filled up

2019-05-16 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841573#comment-16841573
 ] 

Erik Krogen commented on HADOOP-16268:
--

Hey [~crh], the idea seems good. A few comments:
* You've currently made the change within {{CallQueueManager#throwBackoff()}}, 
but this logic is only used when {{shouldBackOff()}} is true, so it is only 
triggered from the response-time-based back-off. You'll also need to take a 
look at {{FairCallQueue#add()}}, which triggers backoff based on the queue 
being full. Ideally we should probably unify this logic.
* I think it would be better if the config was scoped to a certain IPC 
namespace, similar to the other IPC configs, so that you can specify it only 
for certain IPC servers. For example the configs today are like 
{{ipc.8020.callqueue.impl}} so that you can change the impl for only the client 
RPC server.
* I wonder if it would be possible to make this config a bit more general, by 
allowing the user to specify a class to throw on backoff, or specify one of 
{{DISCONNECT}} / {{FAILOVER}} / {{KEEPALIVE}}? Just a thought (sketched below).
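
To sketch the scoping and generalization ideas together (the key name 
{{ipc.8020.backoff.exception.type}} is invented here, by analogy with 
{{ipc.8020.callqueue.impl}}, and {{DISCONNECT2}} is the constant proposed in 
the description, not an existing one):

{code:java}
package org.apache.hadoop.ipc;  // same package, for access to the constants

import org.apache.hadoop.conf.Configuration;

final class BackoffPolicySketch {
  private BackoffPolicySketch() { }

  /** Hypothetical key: ipc.<port>.backoff.exception.type = FAILOVER. */
  static RuntimeException backoffExceptionFor(Configuration conf, int port) {
    String type = conf.get("ipc." + port + ".backoff.exception.type",
        "DISCONNECT");
    switch (type) {
      case "KEEPALIVE":
        return CallQueueManager.CallQueueOverflowException.KEEPALIVE;
      case "FAILOVER":  // would wrap StandbyException, per the proposal
        return CallQueueManager.CallQueueOverflowException.DISCONNECT2;
      default:
        return CallQueueManager.CallQueueOverflowException.DISCONNECT;
    }
  }
}
{code}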

> Allow custom wrapped exception to be thrown by server if RPC call queue is 
> filled up
> 
>
> Key: HADOOP-16268
> URL: https://issues.apache.org/jira/browse/HADOOP-16268
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: CR Hota
>Priority: Major
> Attachments: HADOOP-16268.001.patch
>
>
> In the current implementation of the call queue manager, 
> "CallQueueOverflowException" exceptions always wrap 
> "RetriableException". Through configs, servers should be allowed to throw 
> custom exceptions based on new use cases.
> In CallQueueManager.java for backoff the below is done 
> {code:java}
>   // ideally this behavior should be controllable too.
>   private void throwBackoff() throws IllegalStateException {
> throw CallQueueOverflowException.DISCONNECT;
>   }
> {code}
> Since CallQueueOverflowException only wraps RetriableException, clients would 
> end up hitting the same server for retries. In router use cases, these 
> overflowed requests could be handled by another router that shares the same 
> state, thus distributing load better across a cluster of routers. In the 
> absence of any custom exception, the current behavior should be supported.
> In the CallQueueOverflowException class, a new StandbyException wrapper 
> should be created. Something like the below:
> {code:java}
>static final CallQueueOverflowException KEEPALIVE =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY),
> RpcStatusProto.ERROR);
> static final CallQueueOverflowException DISCONNECT =
> new CallQueueOverflowException(
> new RetriableException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> static final CallQueueOverflowException DISCONNECT2 =
> new CallQueueOverflowException(
> new StandbyException(TOO_BUSY + " - disconnecting"),
> RpcStatusProto.FATAL);
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-493140605
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 22 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1159 | trunk passed |
   | +1 | compile | 1075 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 124 | trunk passed |
   | +1 | shadedclient | 1037 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 96 | trunk passed |
   | 0 | spotbugs | 66 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 184 | trunk passed |
   | -0 | patch | 101 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 78 | the patch passed |
   | +1 | compile | 979 | the patch passed |
   | +1 | javac | 979 | the patch passed |
   | -0 | checkstyle | 152 | root: The patch generated 61 new + 85 unchanged - 
2 fixed = 146 total (was 87) |
   | +1 | mvnsite | 120 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 31 | hadoop-tools_hadoop-aws generated 4 new + 1 unchanged 
- 0 fixed = 5 total (was 1) |
   | -1 | findbugs | 74 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 517 | hadoop-common in the patch passed. |
   | +1 | unit | 292 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7204 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostFirst 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
69-89] |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostLast 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
98-109] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 6d224096b2df 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c15b3bc |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/testReport/ |
   | Max. process+thread count | 1498 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/25/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For 

[GitHub] [hadoop] elek commented on issue #814: HDDS-1518. Use /etc/ozone for configuration inside docker-compose

2019-05-16 Thread GitBox
elek commented on issue #814: HDDS-1518. Use /etc/ozone for configuration 
inside docker-compose 
URL: https://github.com/apache/hadoop/pull/814#issuecomment-493138595
 
 
   Thanks @jiwq for the review, and also thanks to @arp7, @eyanghwx and 
@anuengineer for the review and commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284787939
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+this.currentBuffer = new TransactionBuffer();
+this.readyBuffer = new TransactionBuffer();
+this.omMetadataManager = omMetadataManager;
+
+// Daemon thread which runs in the background and flushes transactions to DB.
+daemon = new Daemon(this::flushTransactions);
+daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batches the transactions in currentBuffer,
+   * and commits them to DB.
+   */
+  private void flushTransactions() {
+while(true) {
+  if (canFlush()) {
+syncToDB = true;
+setReadyBuffer();
+final BatchOperation batchOperation = omMetadataManager.getStore()
+.initBatchOperation();
+
+readyBuffer.iterator().forEachRemaining((entry) -> {
+  try {
+entry.getResponse().addToRocksDBBatch(omMetadataManager,
+batchOperation);
+  } catch (IOException ex) {
+// During Adding to RocksDB batch entry got an exception.
+// We should terminate the OM.
+String message = "During flush to DB encountered error " +
+ex.getMessage();
+ExitUtil.terminate(1, message);
+  }
+});
+
+try {
+  omMetadataManager.getStore().commitBatchOperation(batchOperation);
+} catch (IOException ex) {
+  // During flush to rocksdb got an exception.
+  // We should terminate the OM.
+  String message = "During flush to DB encountered error " +
+  ex.getMessage();
+  ExitUtil.terminate(1, message);
+}
+
+int flushedTransactionsSize = readyBuffer.size();
+flushedTransactionCount.addAndGet(flushedTransactionsSize);
+flushIterations.incrementAndGet();
+
+LOG.info("Sync Iteration {} flushed transactions in this iteration{}",
+flushIterations.get(), flushedTransactionsSize);
+readyBuffer.clear();
+syncToDB = false;
+// TODO: update the last updated index in OzoneManagerStateMachine.
+  }
+}
+  }
+
+  /**
+   * Returns the flushed transaction count to OM DB.
+   * @return flushedTransactionCount
+   */
+  public long getFlushedTransactionCount() {
+return flushedTransactionCount.get();
+  }
+
+  /**
+   * Returns total number of flush iterations run by sync thread.
+   * @return flushIterations
+   */
+  public long getFlushIterations() {
+return flushIterations.get();
+  }
+
+  /**
+   * Add OmResponseBufferEntry to buffer.
+   * @param response
+   * @param transactionIndex
+   */
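
   For readers following along, the essence of the double-buffer pattern in 
   the quoted class, reduced to a minimal generic sketch (not the Ozone 
   code): writers append to the current buffer while a single flusher thread 
   swaps it with the drained ready buffer and commits the batch, so appends 
   never wait on the flush itself.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SimpleDoubleBuffer<T> {
  private List<T> current = new ArrayList<>();
  private List<T> ready = new ArrayList<>();

  public synchronized void add(T entry) {
    current.add(entry);
  }

  private synchronized List<T> swapBuffers() {
    List<T> swapped = current;
    current = ready;  // ready was cleared by the previous flush
    ready = swapped;
    return ready;
  }

  /** Called in a loop by one background daemon, like flushTransactions(). */
  public void flushOnce(Consumer<List<T>> sink) {
    List<T> batch = swapBuffers();
    if (!batch.isEmpty()) {
      sink.accept(batch);  // e.g. commit as one RocksDB batch operation
      batch.clear();
    }
  }
}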

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284787939
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+this.currentBuffer = new TransactionBuffer();
+this.readyBuffer = new TransactionBuffer();
+this.omMetadataManager = omMetadataManager;
+
+// Daemon thread which runs in back ground and flushes transactions to DB.
+daemon = new Daemon(this::flushTransactions);
+daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread and batches the transaction in currentBuffer
+   * and commit to DB.
+   */
+  private void flushTransactions() {
+while(true) {
+  if (canFlush()) {
+syncToDB = true;
+setReadyBuffer();
+final BatchOperation batchOperation = omMetadataManager.getStore()
+.initBatchOperation();
+
+readyBuffer.iterator().forEachRemaining((entry) -> {
+  try {
+entry.getResponse().addToRocksDBBatch(omMetadataManager,
+batchOperation);
+  } catch (IOException ex) {
+// During Adding to RocksDB batch entry got an exception.
+// We should terminate the OM.
+String message = "During flush to DB encountered error " +
+ex.getMessage();
+ExitUtil.terminate(1, message);
+  }
+});
+
+try {
+  omMetadataManager.getStore().commitBatchOperation(batchOperation);
+} catch (IOException ex) {
+  // Committing the batch to RocksDB threw an exception.
+  // We should terminate the OM.
+  String message = "During flush to DB encountered error " +
+  ex.getMessage();
+  ExitUtil.terminate(1, message);
+}
+
+int flushedTransactionsSize = readyBuffer.size();
+flushedTransactionCount.addAndGet(flushedTransactionsSize);
+flushIterations.incrementAndGet();
+
+LOG.info("Sync iteration {} flushed {} transactions",
+flushIterations.get(), flushedTransactionsSize);
+readyBuffer.clear();
+syncToDB = false;
+// TODO: update the last updated index in OzoneManagerStateMachine.
+  }
+}
+  }
+
+  /**
+   * Returns the flushed transaction count to OM DB.
+   * @return flushedTransactionCount
+   */
+  public long getFlushedTransactionCount() {
+return flushedTransactionCount.get();
+  }
+
+  /**
+   * Returns total number of flush iterations run by sync thread.
+   * @return flushIterations
+   */
+  public long getFlushIterations() {
+return flushIterations.get();
+  }
+
+  /**
+   * Add OmResponseBufferEntry to buffer.
+   * @param response
+   * @param transactionIndex
+   */
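
The hunk above references canFlush() and setReadyBuffer(), which fall outside the quoted range. For readers of the thread, the two-buffer swap those methods imply usually looks like the sketch below — semantics assumed from the surrounding code, not taken from the HDDS-1512 patch:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Generic two-buffer swap; assumed semantics, not the HDDS-1512 code. */
class TwoBuffers<T> {
  private Queue<T> current = new ConcurrentLinkedQueue<>();
  private Queue<T> ready = new ConcurrentLinkedQueue<>();

  /** Writers append to the current buffer. */
  synchronized void add(T entry) {
    current.add(entry);
  }

  /** True when there is something to flush. */
  synchronized boolean canFlush() {
    return !current.isEmpty();
  }

  /**
   * Flush thread: swap the two references so writers keep filling the
   * (empty) other queue while this one is drained and committed as a batch.
   */
  synchronized Queue<T> setReadyBuffer() {
    Queue<T> swapped = current;
    current = ready; // ready must have been cleared after the previous flush
    ready = swapped;
    return ready;
  }
}
```

The swap itself is O(1): writers are blocked only for the reference exchange, never for the RocksDB commit.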

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
bharatviswa504 commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284788571
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+this.currentBuffer = new TransactionBuffer();
+this.readyBuffer = new TransactionBuffer();
+this.omMetadataManager = omMetadataManager;
+
+// Daemon thread which runs in the background and flushes transactions to DB.
+daemon = new Daemon(this::flushTransactions);
+daemon.start();
 
 Review comment:
   Your suggestion here is to make the Daemon thread logic a separate class, then instantiate that class in OMDoubleBuffer and start the thread in OMDoubleBuffer?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] linyiqun commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
linyiqun commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284784362
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+this.currentBuffer = new TransactionBuffer();
+this.readyBuffer = new TransactionBuffer();
+this.omMetadataManager = omMetadataManager;
+
+// Daemon thread which runs in the background and flushes transactions to DB.
+daemon = new Daemon(this::flushTransactions);
+daemon.start();
 
 Review comment:
   I prefer to separate the daemon thread from the OM double buffer, so that 
this class will be more flexible to use.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
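
A minimal sketch of the separation linyiqun is suggesting — the flush loop pulled out into its own Runnable, with OzoneManagerDoubleBuffer keeping only the buffer state. The names here (DoubleBufferFlusher, Flushable, flushIfReady) are illustrative, not from the HDDS-1512 patch:

```java
/**
 * Illustrative only: the flush loop extracted from the double buffer.
 * flushIfReady() stands in for the body of flushTransactions() above.
 */
public class DoubleBufferFlusher implements Runnable {

  /** Whatever owns the two buffers implements this. */
  public interface Flushable {
    void flushIfReady();
  }

  private final Flushable target;
  private volatile boolean running = true;

  public DoubleBufferFlusher(Flushable target) {
    this.target = target;
  }

  @Override
  public void run() {
    while (running) {
      target.flushIfReady();
    }
  }

  public void stop() {
    running = false;
  }
}
```

OzoneManagerDoubleBuffer would then wire it up with something like `daemon = new Daemon(new DoubleBufferFlusher(this::flushIfReady)); daemon.start();` and call `stop()` on shutdown, so the loop can be unit-tested on its own.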



[GitHub] [hadoop] linyiqun commented on a change in pull request #810: HDDS-1512. Implement DoubleBuffer in OzoneManager.

2019-05-16 Thread GitBox
linyiqun commented on a change in pull request #810: HDDS-1512. Implement 
DoubleBuffer in OzoneManager.
URL: https://github.com/apache/hadoop/pull/810#discussion_r284786002
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicLong;
+
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.ratis.helpers.DoubleBufferEntry;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A double-buffer for OM requests.
+ */
+public class OzoneManagerDoubleBuffer {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OzoneManagerDoubleBuffer.class.getName());
+
+  private TransactionBuffer currentBuffer;
+  private TransactionBuffer readyBuffer;
+  private Daemon daemon;
+  private volatile boolean syncToDB;
+  private final OMMetadataManager omMetadataManager;
+  private AtomicLong flushedTransactionCount = new AtomicLong(0);
+  private AtomicLong flushIterations = new AtomicLong(0);
+
+  public OzoneManagerDoubleBuffer(OMMetadataManager omMetadataManager) {
+this.currentBuffer = new TransactionBuffer();
+this.readyBuffer = new TransactionBuffer();
+this.omMetadataManager = omMetadataManager;
+
+// Daemon thread which runs in the background and flushes transactions to DB.
+daemon = new Daemon(this::flushTransactions);
+daemon.start();
+
+  }
+
+  /**
+   * Runs in a background thread, batching the transactions in currentBuffer
+   * and committing them to the DB.
+   */
+  private void flushTransactions() {
+while(true) {
+  if (canFlush()) {
+syncToDB = true;
+setReadyBuffer();
+final BatchOperation batchOperation = omMetadataManager.getStore()
+.initBatchOperation();
+
+readyBuffer.iterator().forEachRemaining((entry) -> {
+  try {
+entry.getResponse().addToRocksDBBatch(omMetadataManager,
+batchOperation);
+  } catch (IOException ex) {
+// Adding this entry to the RocksDB batch threw an exception.
+// We should terminate the OM.
+String message = "During flush to DB encountered error " +
+ex.getMessage();
+ExitUtil.terminate(1, message);
+  }
+});
+
+try {
+  omMetadataManager.getStore().commitBatchOperation(batchOperation);
+} catch (IOException ex) {
+  // Committing the batch to RocksDB threw an exception.
+  // We should terminate the OM.
+  String message = "During flush to DB encountered error " +
+  ex.getMessage();
+  ExitUtil.terminate(1, message);
+}
+
+int flushedTransactionsSize = readyBuffer.size();
+flushedTransactionCount.addAndGet(flushedTransactionsSize);
+flushIterations.incrementAndGet();
+
+LOG.info("Sync iteration {} flushed {} transactions",
+flushIterations.get(), flushedTransactionsSize);
+readyBuffer.clear();
+syncToDB = false;
+// TODO: update the last updated index in OzoneManagerStateMachine.
+  }
+}
+  }
+
+  /**
+   * Returns the flushed transaction count to OM DB.
+   * @return flushedTransactionCount
+   */
+  public long getFlushedTransactionCount() {
+return flushedTransactionCount.get();
+  }
+
+  /**
+   * Returns total number of flush iterations run by sync thread.
+   * @return flushIterations
+   */
+  public long getFlushIterations() {
+return flushIterations.get();
+  }
+
+  /**
+   * Add OmResponseBufferEntry to buffer.
+   * @param response
+   * @param transactionIndex
+   */
+  

[GitHub] [hadoop] hadoop-yetus commented on issue #806: HDDS-1224. Restructure code to validate the response from server in the Read path

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #806: HDDS-1224. Restructure code to validate 
the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806#issuecomment-493125985
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 428 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 831 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 129 | trunk passed |
   | 0 | spotbugs | 234 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 413 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 396 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | -0 | checkstyle | 30 | hadoop-hdds: The patch generated 39 new + 0 
unchanged - 0 fixed = 39 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 113 | the patch passed |
   | -1 | findbugs | 195 | hadoop-hdds generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 153 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1266 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7203 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdds |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.chunkIndex; locked 91% of 
time  Unsynchronized access at BlockInputStream.java:91% of time  
Unsynchronized access at BlockInputStream.java:[line 366] |
   | Failed junit tests | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/806 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9641a001c81b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c15b3bc |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/artifact/out/new-findbugs-hadoop-hdds.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/testReport/ |
   | Max. process+thread count | 5402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-806/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 
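
On the findbugs warning in the report above ("Inconsistent synchronization of BlockInputStream.chunkIndex; locked 91% of time"): findbugs raises this when a field is written under a lock on most paths but read without it on at least one. A generic illustration of the pattern and its usual fix — not the actual BlockInputStream code:

```java
class ChunkCursor {
  private int chunkIndex; // intended to be guarded by "this"

  synchronized void advance() {
    chunkIndex++;
  }

  // This is the shape findbugs flags: an unlocked read of a field that
  // every other path accesses under the lock, so a stale value may be seen.
  int unsafeIndex() {
    return chunkIndex;
  }

  // Fix: take the same lock on every access (or make the field volatile
  // if each read and write is an independent single operation).
  synchronized int index() {
    return chunkIndex;
  }
}
```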

[jira] [Resolved] (HADOOP-16050) s3a SSL connections should use OpenSSL

2019-05-16 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar resolved HADOOP-16050.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

Closing. Thanks for the reviews everyone!

> s3a SSL connections should use OpenSSL
> --
>
> Key: HADOOP-16050
> URL: https://issues.apache.org/jira/browse/HADOOP-16050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.1
>Reporter: Justin Uang
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-01-17 at 2.57.06 PM.png
>
>
> We have found that when running the S3AFileSystem, it picks GCM as the ssl 
> cipher suite. Unfortunately this is well known to be slow on java 8: 
> [https://stackoverflow.com/questions/25992131/slow-aes-gcm-encryption-and-decryption-with-java-8u20.]
>  
> In practice we have seen that it can take well over 50% of our CPU time in 
> spark workflows. We should add an option to set the list of cipher suites we 
> would like to use. !Screen Shot 2019-01-17 at 2.57.06 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
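
For readers landing here: the Hudson commit list below shows the fix adds an OpenSSLSocketFactory wired into S3A and ABFS. Even without OpenSSL, plain JSSE lets a client drop the slow GCM suites; a generic sketch of that idea follows (the host and the filter are illustrative, not the HADOOP-16050 code):

{code:java}
import java.util.Arrays;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class NonGcmClient {
  public static void main(String[] args) throws Exception {
    SSLSocketFactory factory =
        (SSLSocketFactory) SSLSocketFactory.getDefault();
    try (SSLSocket socket =
             (SSLSocket) factory.createSocket("example.com", 443)) {
      // Keep every default suite except the GCM ones.
      String[] nonGcm = Arrays.stream(socket.getEnabledCipherSuites())
          .filter(s -> !s.contains("_GCM_"))
          .toArray(String[]::new);
      socket.setEnabledCipherSuites(nonGcm);
      socket.startHandshake();
      System.out.println("Negotiated: "
          + socket.getSession().getCipherSuite());
    }
  }
}
{code}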



[jira] [Assigned] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2019-05-16 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar reassigned HADOOP-13600:
-

Assignee: (was: Sahil Takiar)

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
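
The idea in the description — fan the per-file copies out over a pool and wait for all of them — in its simplest form (file names and pool size are illustrative; the real change would use the S3 server-side COPY call and S3A's own thread pool):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {

  /** Stand-in for the per-object server-side COPY request. */
  static void copyObject(String srcKey, String dstKey) {
    System.out.println("COPY " + srcKey + " -> " + dstKey);
  }

  public static void main(String[] args) throws Exception {
    List<String> files = Arrays.asList("part-0", "part-1", "part-2");
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> pending = new ArrayList<>();
    for (String f : files) {
      pending.add(pool.submit(() -> copyObject("src/" + f, "dst/" + f)));
    }
    for (Future<?> p : pending) {
      p.get(); // waits for each copy; rethrows the first failure seen
    }
    pool.shutdown();
  }
}
{code}

The duration then tracks the longest single copy instead of the sum, which is exactly the O(files * data) point made above.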



[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2019-05-16 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841467#comment-16841467
 ] 

Sahil Takiar commented on HADOOP-13600:
---

No longer working on this, so marking as unassigned.

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #826: HDDS-1517. AllocateBlock call fails with ContainerNotFoundException.

2019-05-16 Thread GitBox
jiwq commented on a change in pull request #826: HDDS-1517. AllocateBlock call 
fails with ContainerNotFoundException.
URL: https://github.com/apache/hadoop/pull/826#discussion_r284761144
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -43,17 +46,15 @@
 
 import java.io.File;
 import java.io.IOException;
-import java.util.Iterator;
-import java.util.Optional;
-import java.util.Random;
-import java.util.Set;
-import java.util.TreeSet;
-import java.util.UUID;
-import java.util.concurrent.TimeUnit;
+import java.util.*;
+import java.util.concurrent.*;
 
 Review comment:
   Better not to use `*` in import statements.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #826: HDDS-1517. AllocateBlock call fails with ContainerNotFoundException.

2019-05-16 Thread GitBox
jiwq commented on a change in pull request #826: HDDS-1517. AllocateBlock call 
fails with ContainerNotFoundException.
URL: https://github.com/apache/hadoop/pull/826#discussion_r284762022
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestSCMContainerManager.java
 ##
 @@ -144,6 +145,41 @@ public void testallocateContainerDistributesAllocation() 
throws Exception {
 Assert.assertTrue(pipelineList.size() > 5);
   }
 
+  @Test
+  public void testAllocateContainerInParallel() throws Exception {
+int threadCount = 20;
+List<ExecutorService> executors = new ArrayList<>(threadCount);
+for (int i = 0; i < threadCount; i++) {
+  executors.add(Executors.newSingleThreadExecutor());
+}
+List<CompletableFuture<ContainerInfo>> futureList =
+new ArrayList<>(threadCount);
+for (int i = 0; i < threadCount; i++) {
+  final CompletableFuture<ContainerInfo> future =
+  new CompletableFuture<>();
+  CompletableFuture.supplyAsync(() -> {
+try {
+  ContainerInfo containerInfo = containerManager
+  .allocateContainer(xceiverClientManager.getType(),
+  xceiverClientManager.getFactor(), containerOwner);
+
+  Assert.assertNotNull(containerInfo);
+  Assert.assertNotNull(containerInfo.getPipelineID());
+  future.complete(containerInfo);
+  return containerInfo;
+} catch (IOException e) {
+  future.completeExceptionally(e);
+}
+return future;
+  }, executors.get(i));
+  futureList.add(future);
+}
+try {
+  CompletableFuture
+  .allOf(futureList.toArray(new CompletableFuture[futureList.size()]))
+  .get();
+} catch (Exception e) {
+  Assert.fail("testAllocateBlockInParallel failed");
+}
+  }
 
 Review comment:
   Need a blank line between two methods.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #733: HDDS-1284. Adjust default values of pipline recovery for more resilient service restart

2019-05-16 Thread GitBox
elek closed pull request #733: HDDS-1284. Adjust default values of pipline 
recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/733
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #733: HDDS-1284. Adjust default values of pipline recovery for more resilient service restart

2019-05-16 Thread GitBox
elek commented on issue #733: HDDS-1284. Adjust default values of pipline 
recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/733#issuecomment-493112805
 
 
   Just merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #754: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-16 Thread GitBox
anuengineer commented on a change in pull request #754: HDDS-1065. OM and DN 
should persist SCM certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/754#discussion_r284760860
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
 ##
 @@ -135,10 +135,11 @@ boolean verifySignature(byte[] data, byte[] signature,
*
* @param pemEncodedCert- pem encoded X509 Certificate
* @param force - override any existing file
+   * @param caCert- Is CA certificate.
* @throws CertificateException - on Error.
*
*/
-  void storeCertificate(String pemEncodedCert, boolean force)
+  void storeCertificate(String pemEncodedCert, boolean force, boolean caCert)
 
 Review comment:
   Why don't you write a new function, called storeRootCertificate. That avoids 
adding this false argument to lots of other parts of the code.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
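
The shape anuengineer is suggesting, sketched below — interface name and exception type are simplified stand-ins, not the actual CertificateClient code:

```java
import java.security.cert.CertificateException;

// Sketch only: a dedicated entry point for the CA certificate keeps the
// boolean out of every existing call site.
public interface CertificateStoreSketch {

  /** Existing two-argument method stays untouched. */
  void storeCertificate(String pemEncodedCert, boolean force)
      throws CertificateException;

  /** New method for the SCM/CA certificate; persists under the trust root. */
  void storeRootCertificate(String pemEncodedCert, boolean force)
      throws CertificateException;
}
```

Internally both can delegate to one private helper that takes the caCert flag, so the flag never leaks into the public API.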



[jira] [Commented] (HADOOP-16050) s3a SSL connections should use OpenSSL

2019-05-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841447#comment-16841447
 ] 

Hudson commented on HADOOP-16050:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16563 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16563/])
HADOOP-16050: s3a SSL connections should use OpenSSL (mackrorysd: rev 
b067f8acaa79b1230336900a5c62ba465b2adb28)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestOpenSSLSocketFactory.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ASSL.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/OpenSSLSocketFactory.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) hadoop-tools/hadoop-azure/pom.xml
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsConfigurationFieldsValidation.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (delete) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/SSLSocketFactoryEx.java
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java


> s3a SSL connections should use OpenSSL
> --
>
> Key: HADOOP-16050
> URL: https://issues.apache.org/jira/browse/HADOOP-16050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.1
>Reporter: Justin Uang
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: Screen Shot 2019-01-17 at 2.57.06 PM.png
>
>
> We have found that when running the S3AFileSystem, it picks GCM as the ssl 
> cipher suite. Unfortunately this is well known to be slow on java 8: 
> [https://stackoverflow.com/questions/25992131/slow-aes-gcm-encryption-and-decryption-with-java-8u20.]
>  
> In practice we have seen that it can take well over 50% of our CPU time in 
> spark workflows. We should add an option to set the list of cipher suites we 
> would like to use. !Screen Shot 2019-01-17 at 2.57.06 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mackrorysd commented on issue #784: HADOOP-16050: s3a SSL connections should use OpenSSL

2019-05-16 Thread GitBox
mackrorysd commented on issue #784: HADOOP-16050: s3a SSL connections should 
use OpenSSL
URL: https://github.com/apache/hadoop/pull/784#issuecomment-493104043
 
 
   +1 from me too, tests ran well. I've pushed this by cherry-picking from your 
remote. Github seems to think I don't have write permissions again.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer closed pull request #814: HDDS-1518. Use /etc/ozone for configuration inside docker-compose

2019-05-16 Thread GitBox
anuengineer closed pull request #814: HDDS-1518. Use /etc/ozone for 
configuration inside docker-compose 
URL: https://github.com/apache/hadoop/pull/814
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #815: HDDS-1522. Provide intellij runConfiguration for Ozone components

2019-05-16 Thread GitBox
anuengineer merged pull request #815: HDDS-1522. Provide intellij 
runConfiguration for Ozone components
URL: https://github.com/apache/hadoop/pull/815
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #815: HDDS-1522. Provide intellij runConfiguration for Ozone components

2019-05-16 Thread GitBox
anuengineer commented on issue #815: HDDS-1522. Provide intellij 
runConfiguration for Ozone components
URL: https://github.com/apache/hadoop/pull/815#issuecomment-493095153
 
 
   +1, LGTM. Merging now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284734055
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 ##
 @@ -1213,8 +1213,12 @@
 
 
   <name>fs.s3a.connection.maximum</name>
-  <value>15</value>
-  <description>Controls the maximum number of simultaneous connections to S3.</description>
+  <value>48</value>
 
 Review comment:
   moved to 72. Now, 64 threads *seems* a lot, but most of these are blocking 
operations rather than IO heavy actions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284735973
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
 ##
 @@ -1215,6 +1215,18 @@ sync.
 
 See [Fail on Error](#fail-on-error) for more detail.
 
+### Error `Attempt to change a resource which is still in use: Table is being 
deleted`
+
+```
+com.amazonaws.services.dynamodbv2.model.ResourceInUseException:
+  Attempt to change a resource which is still in use: Table is being deleted: s
+3guard.test.testDynamoDBInitDestroy351245027 
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284734556
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -138,9 +138,15 @@ private Constants() {
   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
   SimpleAWSCredentialsProvider.NAME;
 
+
+  // the maximum number of tasks cached if all threads are already uploading
+  public static final String MAX_TOTAL_TASKS = "fs.s3a.max.total.tasks";
+
+  public static final int DEFAULT_MAX_TOTAL_TASKS = 5;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16294) Enable access to context by DistCp subclasses

2019-05-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841414#comment-16841414
 ] 

Hudson commented on HADOOP-16294:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16561 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16561/])
HADOOP-16294: Enable access to input options by DistCp subclasses. (stevel: rev 
c15b3bca86a0f973cc020f3ff2d5767ff1bd)
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


> Enable access to context by DistCp subclasses
> -
>
> Key: HADOOP-16294
> URL: https://issues.apache.org/jira/browse/HADOOP-16294
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Trivial
> Fix For: 3.2.1
>
>
> In the DistCp class, the context is private with no getter method allowing 
> retrieval by subclasses. So a subclass would need to save its own copy of the 
> inputOptions supplied to its constructor and reconstruct the context if it 
> wishes to override the createInputFileListing method with logic similar to 
> the original implementation, i.e. calling CopyListing#buildListing with a 
> path and context.
> I propose adding to DistCp this method,
> {noformat}
>   protected DistCpContext getContext() {
> return context;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
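
A sketch of the kind of subclass the new getter unlocks, following the description above (method shapes recalled from DistCp and should be treated as illustrative; credentials are elided):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.CopyListing;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpContext;
import org.apache.hadoop.tools.DistCpOptions;

public class CustomListingDistCp extends DistCp {

  public CustomListingDistCp(Configuration conf, DistCpOptions options)
      throws Exception {
    super(conf, options);
  }

  @Override
  protected Path createInputFileListing(Job job) throws IOException {
    // Without getContext(), the subclass would have to cache its own copy
    // of the input options and rebuild the context here.
    DistCpContext context = getContext();
    Path listingPath = getFileListingPath();
    CopyListing.getCopyListing(getConf(), null, context)
        .buildListing(listingPath, context);
    return listingPath;
  }
}
{code}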



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284736917
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFailureHandling.java
 ##
 @@ -87,12 +100,86 @@ public void testMultiObjectDeleteSomeFiles() throws 
Throwable {
 timer.end("removeKeys");
   }
 
+
+  private Path maybeGetCsvPath() {
+Configuration conf = getConfiguration();
+String csvFile = conf.getTrimmed(KEY_CSVTEST_FILE, DEFAULT_CSVTEST_FILE);
+Assume.assumeTrue("CSV test file is not the default",
+DEFAULT_CSVTEST_FILE.equals(csvFile));
+return new Path(csvFile);
+  }
+
+  /**
+   * Test low-level failure handling with low level delete request.
+   */
   @Test
   public void testMultiObjectDeleteNoPermissions() throws Throwable {
-Path testFile = getLandsatCSVPath(getConfiguration());
-S3AFileSystem fs = (S3AFileSystem)testFile.getFileSystem(
+describe("Delete the landsat CSV file and expect it to fail");
+Path csvPath = maybeGetCsvPath();
+S3AFileSystem fs =
+(S3AFileSystem) csvPath.getFileSystem(getConfiguration());
+List<DeleteObjectsRequest.KeyVersion> keys
+= buildDeleteRequest(
+new String[]{
+fs.pathToKey(csvPath),
+"missing-key.csv"
+});
+MultiObjectDeleteException ex = intercept(
+MultiObjectDeleteException.class,
+() -> fs.removeKeys(keys, false, false));
+
+final List<Path> undeleted
+= extractUndeletedPaths(ex, fs::keyToQualifiedPath);
+String undeletedFiles = join(undeleted);
+failIf(undeleted.size() != 2,
+"undeleted list size wrong: " + undeletedFiles,
+ex);
+assertTrue("no CSV in " + undeletedFiles, undeleted.contains(csvPath));
+
+// and a full split, after adding a new key
+String marker = "/marker";
+Path markerPath = fs.keyToQualifiedPath(marker);
+keys.add(new DeleteObjectsRequest.KeyVersion(marker));
+
+Pair<List<Path>, List<Path>> pair =
+new MultiObjectDeleteSupport(fs.createStoreContext())
+.splitUndeletedKeys(ex, keys);
+assertEquals(undeleted, pair.getLeft());
+List<Path> right = pair.getRight();
+assertEquals("Wrong size for " + join(right), 1, right.size());
+assertEquals(markerPath, right.get(0));
+  }
+
+  /**
+   * See what happens when you delete two entries which do not exist.
+   * The call succeeds; if
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284734722
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -138,9 +138,15 @@ private Constants() {
   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
   SimpleAWSCredentialsProvider.NAME;
 
+
+  // the maximum number of tasks cached if all threads are already uploading
+  public static final String MAX_TOTAL_TASKS = "fs.s3a.max.total.tasks";
+
+  public static final int DEFAULT_MAX_TOTAL_TASKS = 5;
+
   // number of simultaneous connections to s3
   public static final String MAXIMUM_CONNECTIONS = "fs.s3a.connection.maximum";
-  public static final int DEFAULT_MAXIMUM_CONNECTIONS = 15;
+  public static final int DEFAULT_MAXIMUM_CONNECTIONS = 
DEFAULT_MAX_TOTAL_TASKS * 2;
 
 Review comment:
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #654: HADOOP-15183 S3Guard 
store becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#discussion_r284734924
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -283,6 +285,22 @@ private Constants() {
   @InterfaceStability.Unstable
   public static final int DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS = 4;
 
+  /**
+   * The capacity of executor queues for operations other than block
+   * upload, where {@link #FAST_UPLOAD_ACTIVE_BLOCKS} is used instead.
+   * This should be less than {@link #MAX_THREADS} for fair
+   * submission.
+   * Value: {@value}.
+   */
+  public static final String EXECUTOR_CAPACITY = "fs.s3a.executor.capacity";
+
+  /**
+   * The capacity of executor queues for operations other than block
+   * upload, where {@link #FAST_UPLOAD_ACTIVE_BLOCKS} is used instead.
+   * Value: {@value}
+   */
+  public static final int DEFAULT_EXECUTOR_CAPACITY = 10;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
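
For context on "executor capacity" in the quoted javadoc: a bounded work queue is what makes submission fair — once the queue holds `capacity` tasks, submitters are throttled instead of queueing without limit. A generic JDK sketch, not the S3A implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorSketch {
  public static void main(String[] args) {
    int threads = 4;
    int capacity = 10; // cf. fs.s3a.executor.capacity above
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        threads, threads, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(capacity),
        // When the queue is full the caller runs the task itself, which
        // throttles submission instead of growing memory without bound.
        new ThreadPoolExecutor.CallerRunsPolicy());
    for (int i = 0; i < 100; i++) {
      final int n = i;
      pool.execute(() -> System.out.println("task " + n));
    }
    pool.shutdown();
  }
}
```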



[jira] [Commented] (HADOOP-16307) Intern User Name and Group Name in FileStatus

2019-05-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841395#comment-16841395
 ] 

Hudson commented on HADOOP-16307:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16560 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16560/])
HADOOP-16307. Intern User Name and Group Name in FileStatus. (stevel: rev 
2713dcf6e9ef308ffe6102532c90b27c52d27f7c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/protocolPB/PBHelper.java


> Intern User Name and Group Name in FileStatus
> -
>
> Key: HADOOP-16307
> URL: https://issues.apache.org/jira/browse/HADOOP-16307
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.2.1, 3.1.3
>
> Attachments: HADOOP-16307.1.patch, HADOOP-16307.2.patch
>
>
> Client is going OOM with large directory listing 
> {{ClientNamenodeProtocolTranslatorPB#getListing}}
> I captured a memory dump and noted that each and every {{LocatedFileStatus}} 
> has its own copy of the user name and group name.  It is sucking down a lot 
> of memory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
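
The deduplication pattern behind the fix, for context: Hadoop ships org.apache.hadoop.util.StringInterner, whose weak intern pool collapses equal owner/group strings to one canonical instance, presumably applied here where PBHelper builds each FileStatus. A small illustrative demo:

{code:java}
import org.apache.hadoop.util.StringInterner;

public class InternDemo {
  public static void main(String[] args) {
    // Two distinct String objects with equal contents...
    String a = StringInterner.weakIntern(new String("hdfs"));
    String b = StringInterner.weakIntern(new String("hdfs"));
    // ...collapse to one canonical instance after interning.
    System.out.println(a == b); // true
  }
}
{code}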



[GitHub] [hadoop] steveloughran commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-16 Thread GitBox
steveloughran commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-493085118
 
 
   +1
   
   committed to branch-3.2+, can go earlier if you want


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16207) Fix ITestDirectoryCommitMRJob.testMRJob

2019-05-16 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841394#comment-16841394
 ] 

Steve Loughran commented on HADOOP-16207:
-

suspecting a race condition in >1 test. If we isolate paths this should go away

> Fix ITestDirectoryCommitMRJob.testMRJob
> ---
>
> Key: HADOOP-16207
> URL: https://issues.apache.org/jira/browse/HADOOP-16207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Reported failure of {{ITestDirectoryCommitMRJob}} in validation runs of 
> HADOOP-16186; assertIsDirectory with s3guard enabled and a parallel test run: 
> Path "is recorded as deleted by S3Guard"
> {code}
> waitForConsistency();
> assertIsDirectory(outputPath) /* here */
> {code}
> The file is there but there's a tombstone. Possibilities
> * some race condition with another test
> * tombstones aren't timing out
> * committers aren't creating that base dir in a way which cleans up S3Guard's 
> tombstones. 
> Remember: we do have to delete that dest dir before the committer runs unless 
> overwrite==true, so at the start of the run there will be a tombstone. It 
> should be overwritten by a success.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16294) Enable access to context by DistCp subclasses

2019-05-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16294:

   Resolution: Fixed
Fix Version/s: 3.2.1
   Status: Resolved  (was: Patch Available)

+1, committed to branch-3.2 & trunk

thanks!

> Enable access to context by DistCp subclasses
> -
>
> Key: HADOOP-16294
> URL: https://issues.apache.org/jira/browse/HADOOP-16294
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Trivial
> Fix For: 3.2.1
>
>
> In the DistCp class, the context is private with no getter method allowing 
> retrieval by subclasses. So a subclass would need to save its own copy of the 
> inputOptions supplied to its constructor and reconstruct the context if it 
> wishes to override the createInputFileListing method with logic similar to 
> the original implementation, i.e. calling CopyListing#buildListing with a 
> path and context.
> I propose adding to DistCp this method,
> {noformat}
>   protected DistCpContext getContext() {
> return context;
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-16 Thread GitBox
steveloughran closed pull request #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #825: HDDS-1449. JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #825: HDDS-1449. JVM Exit in datanode while 
committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825#issuecomment-493084053
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 9 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 405 | trunk passed |
   | +1 | compile | 199 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 812 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 112 | trunk passed |
   | 0 | spotbugs | 242 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 426 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 388 | the patch passed |
   | +1 | compile | 194 | the patch passed |
   | +1 | javac | 194 | the patch passed |
   | -0 | checkstyle | 26 | hadoop-hdds: The patch generated 12 new + 0 
unchanged - 0 fixed = 12 total (was 0) |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 6 new + 0 
unchanged - 0 fixed = 6 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 623 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | the patch passed |
   | +1 | findbugs | 423 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 144 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1435 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5681 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/825 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 806581a933c1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de01422 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/testReport/ |
   | Max. process+thread count | 4695 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-825/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #824: HADOOP-16085: S3Guard versioning: get ITestS3ARemoteFileChanged to work consistently

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #824: HADOOP-16085: S3Guard versioning: get 
ITestS3ARemoteFileChanged to work consistently
URL: https://github.com/apache/hadoop/pull/824#issuecomment-493082472
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1079 | trunk passed |
   | +1 | compile | 1086 | trunk passed |
   | +1 | checkstyle | 138 | trunk passed |
   | +1 | mvnsite | 129 | trunk passed |
   | +1 | shadedclient | 1027 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 97 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 197 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 90 | the patch passed |
   | +1 | compile | 1210 | the patch passed |
   | +1 | javac | 1210 | the patch passed |
   | -0 | checkstyle | 141 | root: The patch generated 37 new + 70 unchanged - 
4 fixed = 107 total (was 74) |
   | +1 | mvnsite | 125 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 708 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 94 | the patch passed |
   | +1 | findbugs | 207 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 579 | hadoop-common in the patch passed. |
   | +1 | unit | 302 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 7379 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-824/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/824 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux eb106d1c9d2e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de01422 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-824/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-824/1/testReport/ |
   | Max. process+thread count | 1346 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-824/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-16 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-493081737
 
 
> @bgaborg and I worked out that the reason your tests were working but our 
test runs were failing is that you'd explicitly set authoritative = false in 
your config files, whereas we were going with the default, and for hadoop-aws 
test runs, auth is automatically set to true (we should look at that elsewhere).
   
   Hmmm, strange.  I haven't actually set anything about auth mode in my 
config.  I'll have to look at this again.  Unfortunately I have some other 
things going on here that are probably going to require most of my attention 
today.  I will get back to this as quickly as I can.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16307) Intern User Name and Group Name in FileStatus

2019-05-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16307:

   Resolution: Fixed
Fix Version/s: (was: 3.3.0)
   3.1.3
   3.2.1
   Status: Resolved  (was: Patch Available)

+1

Fixed in branch-3.1, 3.2 & trunk.

thanks

> Intern User Name and Group Name in FileStatus
> -
>
> Key: HADOOP-16307
> URL: https://issues.apache.org/jira/browse/HADOOP-16307
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Fix For: 3.2.1, 3.1.3
>
> Attachments: HADOOP-16307.1.patch, HADOOP-16307.2.patch
>
>
> Clients go OOM on large directory listings via 
> {{ClientNamenodeProtocolTranslatorPB#getListing}}.
> A memory dump showed that each and every {{LocatedFileStatus}} 
> carries its own copy of the user name and group name, which consumes a lot 
> of memory.
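
One way to get the deduplication (a sketch using Guava's weak interner; the 
committed patch may do this differently, e.g. via Hadoop's own StringInterner):

{code}
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

// A weak interner keeps one canonical copy of each distinct owner/group
// string alive only as long as some FileStatus still references it.
public final class NameInterner {
  private static final Interner<String> POOL = Interners.newWeakInterner();

  private NameInterner() {
  }

  public static String intern(String name) {
    return name == null ? null : POOL.intern(name);
  }
}
{code}

A listing of a million files owned by a handful of users then holds a handful 
of distinct strings rather than a million copies.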



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-16 Thread GitBox
steveloughran commented on a change in pull request #794: HADOOP-16085: use 
object version or etags to protect against inconsistent read after 
replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#discussion_r284720609
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
 ##
 @@ -390,8 +392,18 @@ public S3AFileStatus next() throws IOException {
 status = statusBatchIterator.next();
 // We remove from provided list the file status listed by S3 so that
 // this does not return duplicate items.
-if (providedStatus.remove(status)) {
-  LOG.debug("Removed the status from provided file status {}", status);
+
+// The provided status is returned as it is assumed to have the better
+// metadata (i.e. the eTag and versionId from S3Guard)
+Optional<S3AFileStatus> provided =
 
 Review comment:
   why not just say:
   ```
   S3AFileStatus status2 = providedStatus.remove(status);
   if (status2 != null) {
     return status;
   }
   ```
   
   That way there's no need to scan the list looking for an entry when the 
   remove operation will be doing it anyway?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #806: HDDS-1224. Restructure code to validate the response from server in the Read path

2019-05-16 Thread GitBox
bshashikant opened a new pull request #806: HDDS-1224. Restructure code to 
validate the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806
 
 
   .


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant closed pull request #806: HDDS-1224. Restructure code to validate the response from server in the Read path

2019-05-16 Thread GitBox
bshashikant closed pull request #806: HDDS-1224. Restructure code to validate 
the response from server in the Read path
URL: https://github.com/apache/hadoop/pull/806
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #826: HDDS-1517. AllocateBlock call fails with ContainerNotFoundException.

2019-05-16 Thread GitBox
bshashikant opened a new pull request #826: HDDS-1517. AllocateBlock call fails 
with ContainerNotFoundException.
URL: https://github.com/apache/hadoop/pull/826
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #805: HDDS-1509. TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently

2019-05-16 Thread GitBox
mukul1987 commented on a change in pull request #805: HDDS-1509. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure fails intermittently
URL: https://github.com/apache/hadoop/pull/805#discussion_r284697864
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyOutputStream.java
 ##
 @@ -259,15 +261,23 @@ private void handleException(BlockOutputStreamEntry 
streamEntry,
 if (!retryFailure) {
   closedContainerException = checkIfContainerIsClosed(t);
 }
-PipelineID pipelineId = null;
+Pipeline pipeline = streamEntry.getPipeline();
+PipelineID pipelineId = streamEntry.getPipeline().getId();
 long totalSuccessfulFlushedData = streamEntry.getTotalAckDataLength();
 //set the correct length for the current stream
 streamEntry.setCurrentPosition(totalSuccessfulFlushedData);
 long bufferedDataLen = blockOutputStreamEntryPool.computeBufferData();
-LOG.debug(
-"Encountered exception {}. The last committed block length is {}, "
-+ "uncommitted data length is {} retry count {}", exception,
-totalSuccessfulFlushedData, bufferedDataLen, retryCount);
+if (closedContainerException) {
+  LOG.debug(
+  "Encountered exception {}. The last committed block length is {}, "
+  + "uncommitted data length is {} retry count {}", exception,
+  totalSuccessfulFlushedData, bufferedDataLen, retryCount);
+} else {
+  LOG.info(
 
 Review comment:
   Let's change this to WARN.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #820: HDDS-1531. Disable the sync flag by default during chunk writes in Datanode.

2019-05-16 Thread GitBox
bshashikant commented on issue #820: HDDS-1531. Disable the sync flag by 
default during chunk writes in Datanode.
URL: https://github.com/apache/hadoop/pull/820#issuecomment-493061239
 
 
   Thanks @mukul1987 for the review. I have committed this change to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant closed pull request #820: HDDS-1531. Disable the sync flag by default during chunk writes in Datanode.

2019-05-16 Thread GitBox
bshashikant closed pull request #820: HDDS-1531. Disable the sync flag by 
default during chunk writes in Datanode.
URL: https://github.com/apache/hadoop/pull/820
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #814: HDDS-1518. Use /etc/ozone for configuration inside docker-compose

2019-05-16 Thread GitBox
anuengineer commented on a change in pull request #814: HDDS-1518. Use 
/etc/ozone for configuration inside docker-compose 
URL: https://github.com/apache/hadoop/pull/814#discussion_r284687801
 
 

 ##
 File path: Dockerfile
 ##
 @@ -36,7 +36,9 @@ RUN chown hadoop /opt
 ADD scripts /opt/
 ADD scripts/krb5.conf /etc/
 RUN yum install -y krb5-workstation
-
+RUN mkdir -p /etc/hadoop && mkdir -p /var/log/hadoop && chmod 777 /etc/hadoop 
&& chmod 777 /var/log/hadoop
+ENV HADOOP_LOG_DIR=/var/log/hadoop
+ENV HADOOP_CONF_DIR=/etc/hadoop
 
 Review comment:
   +1, I will commit this now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on a change in pull request #814: HDDS-1518. Use /etc/ozone for configuration inside docker-compose

2019-05-16 Thread GitBox
elek commented on a change in pull request #814: HDDS-1518. Use /etc/ozone for 
configuration inside docker-compose 
URL: https://github.com/apache/hadoop/pull/814#discussion_r284684940
 
 

 ##
 File path: Dockerfile
 ##
 @@ -36,7 +36,9 @@ RUN chown hadoop /opt
 ADD scripts /opt/
 ADD scripts/krb5.conf /etc/
 RUN yum install -y krb5-workstation
-
+RUN mkdir -p /etc/hadoop && mkdir -p /var/log/hadoop && chmod 777 /etc/hadoop 
&& chmod 777 /var/log/hadoop
+ENV HADOOP_LOG_DIR=/var/log/hadoop
+ENV HADOOP_CONF_DIR=/etc/hadoop
 
 Review comment:
   Thanks for the comment, @eyanghwx. I added the sticky bit in the latest commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 opened a new pull request #825: HDDS-1449. JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.

2019-05-16 Thread GitBox
mukul1987 opened a new pull request #825: HDDS-1449. JVM Exit in datanode while 
committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825
 
 
   https://issues.apache.org/jira/browse/HDDS-1449
   
   The JVM exit was happening because the db instance was being closed while 
another thread was adding a key to the db.
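   
   A minimal sketch of one way to serialize the two (a read-write lock; the 
   actual patch may take a different approach):
   
   ```java
   import java.io.IOException;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   // Writers take the read lock (many may run at once); close() takes the
   // write lock, so it waits for in-flight puts and blocks new ones.
   class GuardedStore {
     private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
     private boolean closed = false;
   
     void put(byte[] key, byte[] value) throws IOException {
       lock.readLock().lock();
       try {
         if (closed) {
           throw new IOException("store is closed");
         }
         // db.put(key, value);  // safe from a concurrent close() here
       } finally {
         lock.readLock().unlock();
       }
     }
   
     void close() {
       lock.writeLock().lock();
       try {
         if (!closed) {
           closed = true;
           // db.close();
         }
       } finally {
         lock.writeLock().unlock();
       }
     }
   }
   ```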


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841112#comment-16841112
 ] 

Gabor Bota edited comment on HADOOP-16279 at 5/16/19 12:21 PM:
---

{quote}You might be able to resolve HADOOP-14468 then.{quote}
I just commented on the issue. I will resolve it soon if no one has any further 
comments on it.

{quote}
AF> why we need more prune() functions added to the MS interface
GB> That prune is for removing expired entries from the ddbms. It uses 
last_updated for expiry rather than mod_time.
AF>  It seems like an internal implementation detail that doesn't need to be 
exposed.
{quote}
True. It is an internal impl question. I think we should prune with 
last_updated.

{quote}
AF> Can we claim that last_updated (metastore write time) >= mod_time?
{quote}
Sure we can. Whenever we access a file's metadata (e.g. do a HEAD or a GET) and 
the file already existed on S3, the {{last_updated}} field will be updated to 
the current time, but the mod_time will be what's in the file descriptor in S3. 
This is a very important detail, and it is the reason we use a different 
field in the first place for TTL in auth dirs: it tells how fresh the metadata 
is.

{quote}
AF> smarter logic that allows you set a policy for handling S3 versus MS 
conflicts
GB> So basically what you mean is to add a conflict resolution algorithm when 
an entry is expired?
AF> Not so much when entry is expired, but when data from S3 conflicts with 
data from MS. For example, MS has tombstone but S3 says file exists.
{quote}
I would say this is out of scope for this issue. We would like to solve only 
the metadata expiry with this, and not add policies for conflict resolution.



was (Author: gabor.bota):
{quote}You might be able to resolve HADOOP-14468 then.{quote}
I just commented on the issue. I will resolve it soon if no one has any further 
comments on it.

{quote}
AF> why we need more prune() functions added to the MS interface
GB> That prune is for removing expired entries from the ddbms. It uses 
last_updated for expiry rather than mod_time.
AF>  It seems like an internal implementation detail that doesn't need to be 
exposed.
{quote}
True. It is an internal impl question. I think we could even go with merging 
the two list internally: so we would prune with last_updated.

{quote}
AF> Can we claim that last_updated (metastore write time) >= mod_time?
{quote}
Sure we can. Whenever we access a file's metadata (eg. do a HEAD or a GET) and 
the file already existed on S3, the {{last_updated}} field will be updated to 
the current time, but the mod_time will be what's in the file descriptor in S3. 
This is a very important detail, this is the reason why we use a different 
field in the first place for TTL in auth dirs. It tells how fresh the metadata 
is.

{quote}
AF> smarter logic that allows you set a policy for handling S3 versus MS 
conflicts
GB> So basically what you mean is to add a conflict resolution algorithm when 
an entry is expired?
AF> Not so much when entry is expired, but when data from S3 conflicts with 
data from MS. For example, MS has tombstone but S3 says file exists.
{quote}
I would say this is out of scope for this issue. We would like to solve only 
the metadata expiry with this, and not add policies for conflict resolution.


> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL is different. That TTL is using the 
> guava cache's internal solution for the TTL of these entries. This is an 
> S3AFileSystem level solution in S3Guard, a layer above all metadata store.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].

[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-16 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841271#comment-16841271
 ] 

Steve Loughran commented on HADOOP-16279:
-

+1 for last_updated; it must always be > mod_time, *except for the special case 
where the client doing the update is in the wrong part of the universe*. If we 
get the response from any PUT request, we may actually get a timestamp there 
that we could use? That doesn't work for delete though, does it?

the MetadataStore interface is private+evolving; I'm doing big changes to it 
for HADOOP-15183, to allow put operations to be part of an aggregate 
(short-lived) bulk operation. If an enum is added to the prune command, I'm 
happy there.

* we shouldn't support pruning file entries if the client is in auth mode
* but we can allow pruning tombstone markers

Note: if someone in non-auth mode prunes old file entries, auth-mode clients 
will not see those files. We need to make sure all auth-mode docs call this out. 
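
The enum idea could look roughly like this (a sketch with illustrative names, 
not the committed interface):

{code}
import java.io.IOException;

/** Sketch: a prune entry point with an explicit mode. */
interface PrunableMetadataStore {

  enum PruneMode {
    /** Remove aged file entries, keyed on S3 mod_time; unsafe for auth-mode
     *  clients, per the note above. */
    ALL_BY_MODTIME,
    /** Remove only expired tombstone markers, keyed on last_updated. */
    TOMBSTONES_BY_LASTUPDATED
  }

  void prune(PruneMode mode, long cutoffEpochMillis) throws IOException;
}
{code}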

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL is different. That TTL is using the 
> guava cache's internal solution for the TTL of these entries. This is an 
> S3AFileSystem level solution in S3Guard, a layer above all metadata store.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behavior than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.
> * Use the same ttl for entries and authoritative directory listing
> * All entries can be expired. Then the returned metadata from the MS will be 
> null.
> * Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
> to MetadataStore interface. These methods will delete all expired metadata 
> from the ms.
> * Use last_updated field in ms for both file metadata and authoritative 
> directory expiry.
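
The expiry test implied by the last two bullets reduces to roughly this 
(illustrative names; the real check lives behind the MetadataStore 
implementations):

{code}
// Expire on last_updated (metastore write time), not S3 mod_time:
// last_updated records when the entry was last confirmed against S3.
final class TtlCheck {
  private TtlCheck() {
  }

  static boolean isExpired(long lastUpdatedMillis, long ttlMillis,
      long nowMillis) {
    return lastUpdatedMillis > 0 && nowMillis - lastUpdatedMillis > ttlMillis;
  }
}
{code}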



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841112#comment-16841112
 ] 

Gabor Bota edited comment on HADOOP-16279 at 5/16/19 12:20 PM:
---

{quote}You might be able to resolve HADOOP-14468 then.{quote}
I just commented on the issue. I will resolve it soon if no one has any further 
comments on it.

{quote}
AF> why we need more prune() functions added to the MS interface
GB> That prune is for removing expired entries from the ddbms. It uses 
last_updated for expiry rather than mod_time.
AF>  It seems like an internal implementation detail that doesn't need to be 
exposed.
{quote}
True. It is an internal impl question. I think we could even go with merging 
the two list internally: so we would prune with last_updated.

{quote}
AF> Can we claim that last_updated (metastore write time) >= mod_time?
{quote}
Sure we can. Whenever we access a file's metadata (e.g. do a HEAD or a GET) and 
the file already existed on S3, the {{last_updated}} field will be updated to 
the current time, but the mod_time will be what's in the file descriptor in S3. 
This is a very important detail, and it is the reason we use a different 
field in the first place for TTL in auth dirs: it tells how fresh the metadata 
is.

{quote}
AF> smarter logic that allows you set a policy for handling S3 versus MS 
conflicts
GB> So basically what you mean is to add a conflict resolution algorithm when 
an entry is expired?
AF> Not so much when entry is expired, but when data from S3 conflicts with 
data from MS. For example, MS has tombstone but S3 says file exists.
{quote}
I would say this is out of scope for this issue. We would like to solve only 
the metadata expiry with this, and not add policies for conflict resolution.



was (Author: gabor.bota):
{quote}You might be able to resolve HADOOP-14468 then.{quote}
I just commented on the issue. I will resolve it soon if no one has any further 
comments on it.

{quote}
AF> why we need more prune() functions added to the MS interface
GB> That prune is for removing expired entries from the ddbms. It uses 
last_updated for expiry rather than mod_time.
AF>  It seems like an internal implementation detail that doesn't need to be 
exposed.
{quote}
True. It is an internal impl question. I think we could even go with merging 
the two list internally: so we would prune with mod_time and last_updated.

{quote}
AF> Can we claim that last_updated (metastore write time) >= mod_time?
{quote}
Sure we can. Whenever we access a file's metadata (eg. do a HEAD or a GET) and 
the file already existed on S3, the {{last_updated}} field will be updated to 
the current time, but the mod_time will be what's in the file descriptor in S3. 
This is a very important detail, this is the reason why we use a different 
field in the first place for TTL in auth dirs. It tells how fresh the metadata 
is.

{quote}
AF> smarter logic that allows you set a policy for handling S3 versus MS 
conflicts
GB> So basically what you mean is to add a conflict resolution algorithm when 
an entry is expired?
AF> Not so much when entry is expired, but when data from S3 conflicts with 
data from MS. For example, MS has tombstone but S3 says file exists.
{quote}
I would say this is out of scope for this issue. We would like to solve only 
the metadata expiry with this, and not add policies for conflict resolution.


> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL is different. That TTL is using the 
> guava cache's internal solution for the TTL of these entries. This is an 
> S3AFileSystem level solution in S3Guard, a layer above all metadata store.

[GitHub] [hadoop] hadoop-yetus commented on issue #782: HDDS-1461. Optimize listStatus api in OzoneFileSystem

2019-05-16 Thread GitBox
hadoop-yetus commented on issue #782: HDDS-1461. Optimize listStatus api in 
OzoneFileSystem
URL: https://github.com/apache/hadoop/pull/782#issuecomment-493043382
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 525 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 405 | trunk passed |
   | +1 | compile | 207 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 236 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 417 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 398 | the patch passed |
   | +1 | compile | 212 | the patch passed |
   | +1 | cc | 212 | the patch passed |
   | +1 | javac | 212 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 663 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 123 | the patch passed |
   | +1 | findbugs | 433 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 150 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1309 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7626 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/782 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4cd11d86f21c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de01422 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/testReport/ |
   | Max. process+thread count | 4646 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/ozone-manager hadoop-ozone/ozonefs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-782/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


