[GitHub] [hadoop] ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265856195
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -131,34 +187,72 @@ public PublicKey getPublicKey() {
   }
 
   /**
-   * Returns the certificate  of the specified component if it exists on the
-   * local system.
+   * Returns the default certificate of given client if it exists.
*
* @return certificate or Null if there is no data.
*/
   @Override
   public X509Certificate getCertificate() {
-if(x509Certificate != null){
+if (x509Certificate != null) {
   return x509Certificate;
 }
 
-Path certPath = securityConfig.getCertificateLocation();
-if (OzoneSecurityUtil.checkIfFileExist(certPath,
-securityConfig.getCertificateFileName())) {
-  CertificateCodec certificateCodec =
-  new CertificateCodec(securityConfig);
-  try {
-X509CertificateHolder x509CertificateHolder =
-certificateCodec.readCertificate();
-x509Certificate =
-CertificateCodec.getX509Certificate(x509CertificateHolder);
-  } catch (java.security.cert.CertificateException | IOException e) {
-getLogger().error("Error reading certificate.", e);
-  }
+if (certSerialId == null) {
+  getLogger().error("Default certificate serial id is not set. Can't " +
+  "locate the default certificate for this client.");
+  return null;
+}
+// Refresh the cache from file system.
+loadAllCertificates();
 
 Review comment:
   We can assert in the constructor that if certSerialId is not null, then its 
corresponding certificate is also not null after it is loaded. But if that 
assertion fails, it becomes a catch-22 for some recovery scenarios: e.g., we 
can't even create an instance of CertificateClient to call init(), which may 
handle some of the automatic recovery in the future.
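The trade-off can be sketched minimally (class and method names here are hypothetical simplifications for illustration, not the actual Ozone API):

```java
import java.util.Map;

// Sketch of the constructor-vs-init() trade-off discussed above.
public class CertClientSketch {
    private final String certSerialId;
    private String certificate; // null until located

    public CertClientSketch(String certSerialId) {
        // Deliberately no assertion here: failing in the constructor would
        // make the client impossible to create, so init() could never run
        // its (future) recovery logic, i.e. the catch-22 in the comment.
        this.certSerialId = certSerialId;
    }

    // init() can observe the missing certificate and report it, letting the
    // caller (or future code) attempt recovery instead of crashing earlier.
    public boolean init(Map<String, String> certStore) {
        if (certSerialId == null) {
            return false; // nothing to locate
        }
        certificate = certStore.get(certSerialId);
        return certificate != null; // false -> caller may trigger recovery
    }
}
```

Constructing the client always succeeds; only init() reports whether the default certificate could actually be located.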


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265856213
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/LongCodec.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.utils.db;
+
+import com.google.common.primitives.Longs;
+
+import java.io.IOException;
+
+/**
+ * Codec to convert String to/from byte array.
 
 Review comment:
   done





[GitHub] [hadoop] ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265855876
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -131,34 +187,72 @@ public PublicKey getPublicKey() {
   }
 
   /**
-   * Returns the certificate  of the specified component if it exists on the
-   * local system.
+   * Returns the default certificate of given client if it exists.
*
* @return certificate or Null if there is no data.
*/
   @Override
   public X509Certificate getCertificate() {
-if(x509Certificate != null){
+if (x509Certificate != null) {
   return x509Certificate;
 }
 
-Path certPath = securityConfig.getCertificateLocation();
-if (OzoneSecurityUtil.checkIfFileExist(certPath,
-securityConfig.getCertificateFileName())) {
-  CertificateCodec certificateCodec =
-  new CertificateCodec(securityConfig);
-  try {
-X509CertificateHolder x509CertificateHolder =
-certificateCodec.readCertificate();
-x509Certificate =
-CertificateCodec.getX509Certificate(x509CertificateHolder);
-  } catch (java.security.cert.CertificateException | IOException e) {
-getLogger().error("Error reading certificate.", e);
-  }
+if (certSerialId == null) {
+  getLogger().error("Default certificate serial id is not set. Can't " +
+  "locate the default certificate for this client.");
+  return null;
+}
+// Refresh the cache from file system.
+loadAllCertificates();
 
 Review comment:
   The local certificate is initialized during the initial call to 
loadAllCertificates. Now if it is still null at L202, it is also not present 
in the map, so reloading it from the filesystem and checking again is the 
only option.
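The lookup pattern being described can be sketched as follows (a simplified stand-in: Strings in place of X509Certificate objects, and a Map argument in place of the filesystem):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the lazy certificate lookup: return the cached copy if present;
// otherwise refresh the in-memory map from "disk" once and check again.
public class CertCacheSketch {

    private final Map<String, String> certificateMap = new HashMap<>();
    private final String certSerialId; // serial id of the default certificate
    private String cachedCert;         // in-memory copy, null until loaded

    public CertCacheSketch(String certSerialId) {
        this.certSerialId = certSerialId;
    }

    // Stand-in for re-reading all certificate files from disk.
    private void loadAllCertificates(Map<String, String> onDisk) {
        certificateMap.putAll(onDisk);
    }

    public String getCertificate(Map<String, String> onDisk) {
        if (cachedCert != null) {
            return cachedCert;          // fast path: already cached
        }
        if (certSerialId == null) {
            return null;                // no default certificate configured
        }
        loadAllCertificates(onDisk);    // refresh the cache once
        cachedCert = certificateMap.get(certSerialId);
        return cachedCert;              // may still be null if truly absent
    }
}
```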





[GitHub] [hadoop] ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
ajayydv commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265855638
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -349,29 +441,39 @@ public X509Certificate queryCertificate(String query) {
   }
 
   /**
-   * Stores the Certificate  for this client. Don't use this api to add
-   * trusted certificates of other components.
+   * Stores the Certificate  for this client. Don't use this api to add trusted
+   * certificates of others.
*
-   * @param certificate - X509 Certificate
+   * @param pemEncodedCert - pem encoded X509 Certificate
+   * @param force - override any existing file
* @throws CertificateException - on Error.
+   *
*/
   @Override
-  public void storeCertificate(X509Certificate certificate)
+  public void storeCertificate(String pemEncodedCert, boolean force)
   throws CertificateException {
 CertificateCodec certificateCodec = new CertificateCodec(securityConfig);
 try {
-  certificateCodec.writeCertificate(
-  new X509CertificateHolder(certificate.getEncoded()));
-} catch (IOException | CertificateEncodingException e) {
+  Path basePath = securityConfig.getCertificateLocation();
+  String certName;
+  X509Certificate cert =
+  CertificateCodec.getX509Certificate(pemEncodedCert);
+  certName = String.format(CERT_FILE_NAME_FORMAT,
 
 Review comment:
   done.





[jira] [Commented] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793309#comment-16793309
 ] 

Hadoop QA commented on HADOOP-16191:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16191 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962558/HADOOP-16191.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a0a064f95567 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2627dad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16055/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16055/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191

[jira] [Commented] (HADOOP-16182) Update abfs storage back-end with "close" flag when application is done writing to a file

2019-03-14 Thread Vishwajeet Dusane (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793307#comment-16793307
 ] 

Vishwajeet Dusane commented on HADOOP-16182:


Thank you [~DanielZhou]. [~ste...@apache.org], could you please also review 
this patch for commit?

> Update abfs storage back-end with "close" flag when application is done 
> writing to a file 
> --
>
> Key: HADOOP-16182
> URL: https://issues.apache.org/jira/browse/HADOOP-16182
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HADOOP-16182.001.patch
>
>
> As part of Azure Data Lake Storage Gen2 notifications design, customers are 
> interested in knowing when a client is done writing to a file so they can 
> take certain actions like initiate a pipeline, or replicate a file, or start 
> certain processing. To satisfy that, ABFS client should send "close" flag 
> during Flush rest API when invoked by OutputStream::close() API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] [hadoop] hadoop-yetus commented on issue #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #594: HDDS-1246. Add ozone delegation token 
utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#issuecomment-473155720
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 21 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1280 | trunk passed |
   | +1 | compile | 108 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 110 | trunk passed |
   | +1 | shadedclient | 731 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 150 | trunk passed |
   | +1 | javadoc | 85 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 92 | the patch passed |
   | +1 | compile | 93 | the patch passed |
   | +1 | javac | 93 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 82 | the patch passed |
   | +1 | shellcheck | 27 | There were no new shellcheck issues. |
   | +1 | shelldocs | 14 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 816 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 155 | the patch passed |
   | +1 | javadoc | 77 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 28 | client in the patch passed. |
   | +1 | unit | 94 | ozonefs in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4284 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/594 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  shellcheck  shelldocs  |
   | uname | Linux 9e0430b6eea6 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2627dad |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/2/testReport/ |
   | Max. process+thread count | 2913 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozonefs 
U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   
   





[GitHub] [hadoop] bharatviswa504 merged pull request #579: HDDS-761. Create S3 subcommand to run S3 related operations

2019-03-14 Thread GitBox
bharatviswa504 merged pull request #579: HDDS-761. Create S3 subcommand to run 
S3 related operations
URL: https://github.com/apache/hadoop/pull/579
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile Code

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile 
Code
URL: https://github.com/apache/hadoop/pull/610#issuecomment-473149253
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 972 | trunk passed |
   | +1 | compile | 111 | trunk passed |
   | +1 | checkstyle | 40 | trunk passed |
   | +1 | mvnsite | 67 | trunk passed |
   | +1 | shadedclient | 706 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 86 | trunk passed |
   | +1 | javadoc | 30 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 59 | the patch passed |
   | +1 | compile | 113 | the patch passed |
   | +1 | javac | 113 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-mapreduce-project/hadoop-mapreduce-client: 
The patch generated 3 new + 56 unchanged - 20 fixed = 59 total (was 76) |
   | +1 | mvnsite | 65 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 691 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 98 | the patch passed |
   | +1 | javadoc | 31 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 309 | hadoop-mapreduce-client-core in the patch passed. |
   | -1 | unit | 8229 | hadoop-mapreduce-client-jobclient in the patch failed. |
   | -1 | asflicense | 42 | The patch generated 1 ASF License warnings. |
   | | | 11790 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.mapred.TestLazyOutput |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/610 |
   | JIRA Issue | MAPREDUCE-7193 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux bcb1af03ac72 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2627dad |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 1599 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 U: hadoop-mapreduce-project/hadoop-mapreduce-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   
   





[jira] [Updated] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Status: Patch Available  (was: Open)

> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.1.2, 3.0.3, 2.9.2, 2.10.0, 3.3.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> The returned status of copyFile and copyDirectory is ignored. That is OK most 
> of the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, the OSS filesystem cannot catch errors when renaming from one 
> directory to another if the source directory is being deleted. 
>  
> Another improvement is logging optimization: changing the log level to debug.
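The fix the ticket describes, propagating the copy result instead of dropping it, can be sketched as follows (copyDirectory, copyFile, and delete are stubs standing in for the real OSS operations; the directory stub is hard-wired to fail, modeling the src-directory-being-deleted case):

```java
// Sketch of rename() propagating the copy result instead of ignoring it.
public class RenameSketch {

    // Stubs for the real copy operations, which report success/failure.
    // copyDirectory fails here to model the source dir being deleted mid-copy.
    static boolean copyDirectory(String src, String dst) { return false; }
    static boolean copyFile(String src, String dst) { return true; }
    static boolean delete(String path) { return true; }

    // Before the fix, the copy result was dropped, so rename could report
    // success even when the copy failed. After: surface the failure.
    static boolean rename(String src, String dst, boolean isDir) {
        boolean copied = isDir ? copyDirectory(src, dst) : copyFile(src, dst);
        if (!copied) {
            return false; // propagate the failure instead of ignoring it
        }
        return src.equals(dst) || delete(src);
    }
}
```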






[GitHub] [hadoop] xiaoyuyao commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA 
for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473147758
 
 
   Thanks @ajayydv for the update. We are almost there. Just a few last issues, 
as commented inline.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265840907
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSecretManager.java
 ##
 @@ -90,8 +90,9 @@ public OzoneDelegationTokenSecretManager(OzoneConfiguration 
conf,
 service, LOG);
 currentTokens = new ConcurrentHashMap();
 this.tokenRemoverScanInterval = dtRemoverScanInterval;
-this.store = new OzoneSecretStore(conf);
-this.s3SecretManager = s3SecretManager;
+this.s3SecretManager = (S3SecretManagerImpl) s3SecretManager;
 
 Review comment:
   Why do we need to change the S3 secret manager here from the interface to 
the impl? It seems unrelated to this ticket.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265840558
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/LongCodec.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.utils.db;
+
+import com.google.common.primitives.Longs;
+
+import java.io.IOException;
+
+/**
+ * Codec to convert String to/from byte array.
 
 Review comment:
   NIT: this should be Long
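The codec under review round-trips Long keys to and from byte arrays. A minimal sketch of that round-trip (java.nio.ByteBuffer is used here instead of Guava's Longs so the example is self-contained; the real class lives in org.apache.hadoop.utils.db):

```java
import java.nio.ByteBuffer;

// Sketch of a Long <-> byte[] codec in the spirit of the LongCodec
// discussed above.
public class LongCodecSketch {

    // Encode a Long as its 8-byte big-endian representation.
    public static byte[] toPersistedFormat(Long value) {
        return ByteBuffer.allocate(Long.BYTES).putLong(value).array();
    }

    // Decode 8 big-endian bytes back into a Long.
    public static Long fromPersistedFormat(byte[] raw) {
        return ByteBuffer.wrap(raw).getLong();
    }
}
```

Big-endian encoding keeps the byte-wise sort order of non-negative keys consistent with their numeric order, which matters for ordered key-value stores.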





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265840441
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -349,29 +441,39 @@ public X509Certificate queryCertificate(String query) {
   }
 
   /**
-   * Stores the Certificate  for this client. Don't use this api to add
-   * trusted certificates of other components.
+   * Stores the Certificate  for this client. Don't use this api to add trusted
+   * certificates of others.
*
-   * @param certificate - X509 Certificate
+   * @param pemEncodedCert - pem encoded X509 Certificate
+   * @param force - override any existing file
* @throws CertificateException - on Error.
+   *
*/
   @Override
-  public void storeCertificate(X509Certificate certificate)
+  public void storeCertificate(String pemEncodedCert, boolean force)
   throws CertificateException {
 CertificateCodec certificateCodec = new CertificateCodec(securityConfig);
 try {
-  certificateCodec.writeCertificate(
-  new X509CertificateHolder(certificate.getEncoded()));
-} catch (IOException | CertificateEncodingException e) {
+  Path basePath = securityConfig.getCertificateLocation();
+  String certName;
+  X509Certificate cert =
+  CertificateCodec.getX509Certificate(pemEncodedCert);
+  certName = String.format(CERT_FILE_NAME_FORMAT,
 
 Review comment:
   NIT: move line 458 to line 461 and combine them.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265840287
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/DefaultCertificateClient.java
 ##
 @@ -131,34 +187,72 @@ public PublicKey getPublicKey() {
   }
 
   /**
-   * Returns the certificate  of the specified component if it exists on the
-   * local system.
+   * Returns the default certificate of given client if it exists.
*
* @return certificate or Null if there is no data.
*/
   @Override
   public X509Certificate getCertificate() {
-if(x509Certificate != null){
+if (x509Certificate != null) {
   return x509Certificate;
 }
 
-Path certPath = securityConfig.getCertificateLocation();
-if (OzoneSecurityUtil.checkIfFileExist(certPath,
-securityConfig.getCertificateFileName())) {
-  CertificateCodec certificateCodec =
-  new CertificateCodec(securityConfig);
-  try {
-X509CertificateHolder x509CertificateHolder =
-certificateCodec.readCertificate();
-x509Certificate =
-CertificateCodec.getX509Certificate(x509CertificateHolder);
-  } catch (java.security.cert.CertificateException | IOException e) {
-getLogger().error("Error reading certificate.", e);
-  }
+if (certSerialId == null) {
+  getLogger().error("Default certificate serial id is not set. Can't " +
+  "locate the default certificate for this client.");
+  return null;
+}
+// Refresh the cache from file system.
+loadAllCertificates();
 
 Review comment:
   Should we call loadAllCertificates() again only if the map does not contain the 
   certificate? 
   
   Also, in the constructor, when we call loadAllCertificates(), should we assert 
that the passed-in certSerialId was loaded into the map from the file system?
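The cache-miss guard and constructor assertion suggested above can be sketched roughly as follows. This is a hypothetical stand-in for DefaultCertificateClient's internals: `CertCache`, the string-keyed map, and the map-backed "disk" parameter are all assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the review suggestion: reload from storage only
// on a cache miss, and fail fast in the constructor if the configured
// serial id was not loaded.
class CertCache {
  private final Map<String, String> certificateMap = new ConcurrentHashMap<>();
  private final String certSerialId;

  CertCache(String certSerialId, Map<String, String> onDisk) {
    this.certSerialId = certSerialId;
    loadAll(onDisk);
    // Constructor-time assertion: the configured cert must be present.
    if (certSerialId != null && !certificateMap.containsKey(certSerialId)) {
      throw new IllegalStateException(
          "Configured certificate " + certSerialId + " not found on disk");
    }
  }

  private void loadAll(Map<String, String> onDisk) {
    certificateMap.putAll(onDisk);
  }

  String getCertificate(Map<String, String> onDisk) {
    if (certSerialId == null) {
      return null;
    }
    // Reload only when the map does not already contain the certificate.
    if (!certificateMap.containsKey(certSerialId)) {
      loadAll(onDisk);
    }
    return certificateMap.get(certSerialId);
  }
}
```

The reviewer's "catch 22" concern maps to the constructor throw: if recovery paths need a partially-initialized client, the check would have to be softened to a warning.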





[GitHub] [hadoop] hadoop-yetus commented on issue #579: HDDS-761. Create S3 subcommand to run S3 related operations

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #579: HDDS-761. Create S3 subcommand to run S3 
related operations
URL: https://github.com/apache/hadoop/pull/579#issuecomment-473139432
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1082 | trunk passed |
   | +1 | compile | 967 | trunk passed |
   | +1 | checkstyle | 219 | trunk passed |
   | -1 | mvnsite | 46 | integration-test in trunk failed. |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 127 | trunk passed |
   | +1 | javadoc | 182 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | -1 | mvninstall | 25 | integration-test in the patch failed. |
   | +1 | compile | 916 | the patch passed |
   | +1 | javac | 916 | the patch passed |
   | +1 | checkstyle | 220 | the patch passed |
   | +1 | mvnsite | 209 | the patch passed |
   | +1 | shellcheck | 25 | There were no new shellcheck issues. |
   | +1 | shelldocs | 33 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdds/docs hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 142 | the patch passed |
   | +1 | javadoc | 182 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | docs in the patch passed. |
   | +1 | unit | 54 | common in the patch passed. |
   | +1 | unit | 36 | dist in the patch passed. |
   | -1 | unit | 926 | integration-test in the patch failed. |
   | +1 | unit | 66 | ozone-manager in the patch passed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7830 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/579 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  compile  javac  javadoc  mvninstall  shadedclient  findbugs  
checkstyle  |
   | uname | Linux 8244fe72a8da 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2627dad |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/testReport/ |
   | Max. process+thread count | 4455 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-579/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Updated] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Description: 
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, the OSS filesystem cannot catch errors when renaming from one 
directory to another if the source directory is being deleted. 

 

Another improvement is to optimize logging: remove meaningless logs or lower 
them to debug level.
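The ignored-return-status fix described above can be sketched as propagating the boolean results so a failed copy fails the rename. This is a hedged, standalone sketch: `RenameSketch` and the supplier parameters are hypothetical stand-ins for the real AliyunOSS copyFile/copyDirectory/delete calls.

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch: surface the copy result instead of ignoring it,
// so rename() returns false when the underlying copy fails.
class RenameSketch {
  static boolean rename(boolean isDirectory,
                        BooleanSupplier copyDirectory,
                        BooleanSupplier copyFile,
                        boolean samePath,
                        BooleanSupplier deleteSrc) {
    boolean copied = isDirectory ? copyDirectory.getAsBoolean()
                                 : copyFile.getAsBoolean();
    if (!copied) {
      return false;  // propagate the copy failure instead of dropping it
    }
    return samePath || deleteSrc.getAsBoolean();
  }
}
```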

  was:
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, oss fs can not catch errors when rename from one dir to another if the 
src dir is being deleted. 


> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 
>  
> Another improvement is optimize logging and remove meaningless logs or change 
> to debug level



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Description: 
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, the OSS filesystem cannot catch errors when renaming from one 
directory to another if the source directory is being deleted. 

 

Another improvement is logging optimization: changing the log level to debug.

  was:
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, oss fs can not catch errors when rename from one dir to another if the 
src dir is being deleted. 

 

Another improvement is logging optimization.


> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 
>  
> Another improvement is logging optimization. Changing log level to debug.






[jira] [Updated] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Description: 
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, the OSS filesystem cannot catch errors when renaming from one 
directory to another if the source directory is being deleted. 

 

Another improvement is logging optimization.

  was:
Returned status of copyFile and copyDirectory are ignored. It's OK most of the 
time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);{code}
However, oss fs can not catch errors when rename from one dir to another if the 
src dir is being deleted. 

 

Another improvement is optimize logging and remove meaningless logs or change 
to debug level


> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 
>  
> Another improvement is logging optimization.






[jira] [Updated] (HADOOP-16191) AliyunOSS: improvements for copyFile/copyDirectory and logging

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Summary: AliyunOSS: improvements for copyFile/copyDirectory and logging  
(was: AliyunOSS: returned statuses of copyFile and copyDirectory are ignored)

> AliyunOSS: improvements for copyFile/copyDirectory and logging
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 






[jira] [Updated] (HADOOP-16191) AliyunOSS: returned statuses of copyFile and copyDirectory are ignored

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Attachment: HADOOP-16191.001.patch

> AliyunOSS: returned statuses of copyFile and copyDirectory are ignored
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16191.001.patch
>
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 






[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473132399
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1030 | trunk passed |
   | +1 | compile | 964 | trunk passed |
   | +1 | checkstyle | 237 | trunk passed |
   | -1 | mvnsite | 40 | container-service in trunk failed. |
   | -1 | mvnsite | 40 | server-scm in trunk failed. |
   | -1 | mvnsite | 38 | integration-test in trunk failed. |
   | -1 | mvnsite | 34 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1281 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 31 | container-service in trunk failed. |
   | -1 | findbugs | 34 | server-scm in trunk failed. |
   | -1 | findbugs | 37 | ozone-manager in trunk failed. |
   | +1 | javadoc | 243 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 17 | dist in the patch failed. |
   | +1 | compile | 932 | the patch passed |
   | +1 | cc | 932 | the patch passed |
   | +1 | javac | 932 | the patch passed |
   | +1 | checkstyle | 204 | the patch passed |
   | +1 | mvnsite | 285 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 324 | the patch passed |
   | +1 | javadoc | 241 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 100 | common in the patch passed. |
   | -1 | unit | 78 | container-service in the patch failed. |
   | +1 | unit | 116 | server-scm in the patch passed. |
   | +1 | unit | 44 | common in the patch passed. |
   | +1 | unit | 29 | dist in the patch passed. |
   | -1 | unit | 880 | integration-test in the patch failed. |
   | -1 | unit | 72 | ozone-manager in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 8504 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux 40797069f256 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-mvnsite-hadoop-hdds_container-service.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-mvnsite-hadoop-hdds_server-scm.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/18/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be 
taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-473129276
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 53 | Maven dependency ordering for branch |
   | +1 | mvninstall | 999 | trunk passed |
   | +1 | compile | 1020 | trunk passed |
   | +1 | checkstyle | 181 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 913 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 43 | trunk passed |
   | +1 | javadoc | 46 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 63 | the patch passed |
   | +1 | compile | 963 | the patch passed |
   | +1 | javac | 963 | the patch passed |
   | +1 | checkstyle | 186 | the patch passed |
   | +1 | mvnsite | 68 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 602 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 45 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 109 | server-scm in the patch passed. |
   | -1 | unit | 1123 | integration-test in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 6629 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/612 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux bbfae4704466 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/2/testReport/ |
   | Max. process+thread count | 3755 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #612: HDDS-1285. Implement actions need to be 
taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612#issuecomment-473128833
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1000 | trunk passed |
   | +1 | compile | 998 | trunk passed |
   | +1 | checkstyle | 198 | trunk passed |
   | +1 | mvnsite | 84 | trunk passed |
   | +1 | shadedclient | 946 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 38 | trunk passed |
   | +1 | javadoc | 49 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 58 | the patch passed |
   | +1 | compile | 982 | the patch passed |
   | +1 | javac | 982 | the patch passed |
   | +1 | checkstyle | 192 | the patch passed |
   | +1 | mvnsite | 68 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 621 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 46 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 109 | server-scm in the patch passed. |
   | -1 | unit | 1152 | integration-test in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6660 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/612 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 0adadc1748e7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/1/testReport/ |
   | Max. process+thread count | 3982 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-612/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473123532
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 59 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1324 | trunk passed |
   | +1 | compile | 1429 | trunk passed |
   | +1 | checkstyle | 265 | trunk passed |
   | +1 | mvnsite | 368 | trunk passed |
   | +1 | shadedclient | 1504 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 63 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 245 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | -1 | mvninstall | 31 | integration-test in the patch failed. |
   | +1 | compile | 970 | the patch passed |
   | +1 | cc | 970 | the patch passed |
   | +1 | javac | 970 | the patch passed |
   | +1 | checkstyle | 250 | the patch passed |
   | +1 | mvnsite | 330 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 375 | the patch passed |
   | +1 | javadoc | 271 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 90 | common in the patch passed. |
   | -1 | unit | 86 | container-service in the patch failed. |
   | -1 | unit | 153 | server-scm in the patch failed. |
   | +1 | unit | 54 | common in the patch passed. |
   | +1 | unit | 39 | dist in the patch passed. |
   | -1 | unit | 950 | integration-test in the patch failed. |
   | -1 | unit | 65 | ozone-manager in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 10006 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineUtils |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux 657729f16ec8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/17/testReport/ |
   | Max. process+thread count | 2780 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] bharatviswa504 merged pull request #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
bharatviswa504 merged pull request #611: HDDS-1265. "ozone sh s3 getsecret" 
throws Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile Code

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile 
Code
URL: https://github.com/apache/hadoop/pull/610#issuecomment-473111002
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 52 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1214 | trunk passed |
   | +1 | compile | 138 | trunk passed |
   | +1 | checkstyle | 47 | trunk passed |
   | +1 | mvnsite | 81 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 103 | trunk passed |
   | +1 | javadoc | 37 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | compile | 135 | the patch passed |
   | +1 | javac | 135 | the patch passed |
   | -0 | checkstyle | 44 | hadoop-mapreduce-project/hadoop-mapreduce-client: 
The patch generated 4 new + 56 unchanged - 20 fixed = 60 total (was 76) |
   | +1 | mvnsite | 74 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 123 | the patch passed |
   | +1 | javadoc | 33 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 312 | hadoop-mapreduce-client-core in the patch passed. |
   | -1 | unit | 7559 | hadoop-mapreduce-client-jobclient in the patch failed. |
   | -1 | asflicense | 36 | The patch generated 1 ASF License warnings. |
   | | | 11761 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/610 |
   | JIRA Issue | MAPREDUCE-7193 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 003f60fc4836 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 1174 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 U: hadoop-mapreduce-project/hadoop-mapreduce-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #610: [MAPREDUCE-7193] Review of CombineFile Code

2019-03-14 Thread GitBox
hadoop-yetus commented on a change in pull request #610: [MAPREDUCE-7193] 
Review of CombineFile Code
URL: https://github.com/apache/hadoop/pull/610#discussion_r265811073
 
 

 ##
 File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
 ##
 @@ -568,17 +568,20 @@ private void addCreatedSplit(List splitList,
*/
   @VisibleForTesting
   static class OneFileInfo {
-private long fileSize;   // size of the file
-private OneBlockInfo[] blocks;   // all blocks in this file
-
-OneFileInfo(FileStatus stat, Configuration conf,
-boolean isSplitable,
-HashMap> rackToBlocks,
-HashMap blockToNodes,
-HashMap> nodeToBlocks,
-HashMap> rackToNodes,
-long maxSize)
-throws IOException {
+
+/** Size of the file. */
+private long fileSize;
+
+/** All blocks in this file. */
+private OneBlockInfo[] blocks; 
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] bharatviswa504 opened a new pull request #612: HDDS-1285. Implement actions need to be taken after chill mode exit w…

2019-03-14 Thread GitBox
bharatviswa504 opened a new pull request #612: HDDS-1285. Implement actions 
need to be taken after chill mode exit w…
URL: https://github.com/apache/hadoop/pull/612
 
 
   …ait time.





[GitHub] [hadoop] hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws 
Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611#issuecomment-473107850
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 96 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1098 | trunk passed |
   | +1 | compile | 98 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 64 | trunk passed |
   | +1 | shadedclient | 806 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 38 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 63 | the patch passed |
   | +1 | compile | 89 | the patch passed |
   | +1 | javac | 89 | the patch passed |
   | +1 | checkstyle | 23 | the patch passed |
   | +1 | mvnsite | 51 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 792 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 45 | the patch passed |
   | +1 | javadoc | 34 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | ozone-manager in the patch passed. |
   | -1 | unit | 703 | integration-test in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4222 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.om.TestOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/611 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 6ce3f5fe27f1 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/3/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/3/testReport/ |
   | Max. process+thread count | 4680 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile Code

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #610: [MAPREDUCE-7193] Review of CombineFile 
Code
URL: https://github.com/apache/hadoop/pull/610#issuecomment-473107160
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1060 | trunk passed |
   | +1 | compile | 109 | trunk passed |
   | +1 | checkstyle | 39 | trunk passed |
   | +1 | mvnsite | 67 | trunk passed |
   | +1 | shadedclient | 719 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 85 | trunk passed |
   | +1 | javadoc | 34 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 56 | the patch passed |
   | +1 | compile | 106 | the patch passed |
   | +1 | javac | 106 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-mapreduce-project/hadoop-mapreduce-client: 
The patch generated 4 new + 56 unchanged - 20 fixed = 60 total (was 76) |
   | +1 | mvnsite | 65 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 674 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 99 | the patch passed |
   | +1 | javadoc | 30 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 300 | hadoop-mapreduce-client-core in the patch passed. |
   | -1 | unit | 8326 | hadoop-mapreduce-client-jobclient in the patch failed. |
   | -1 | asflicense | 33 | The patch generated 1 ASF License warnings. |
   | | | 11928 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/610 |
   | JIRA Issue | MAPREDUCE-7193 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 637ab6457183 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 1615 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 U: hadoop-mapreduce-project/hadoop-mapreduce-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-610/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473105862
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1044 | trunk passed |
   | +1 | compile | 966 | trunk passed |
   | +1 | checkstyle | 203 | trunk passed |
   | -1 | mvnsite | 36 | container-service in trunk failed. |
   | -1 | mvnsite | 38 | server-scm in trunk failed. |
   | -1 | mvnsite | 37 | integration-test in trunk failed. |
   | -1 | mvnsite | 35 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1214 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 32 | container-service in trunk failed. |
   | -1 | findbugs | 31 | server-scm in trunk failed. |
   | -1 | findbugs | 32 | ozone-manager in trunk failed. |
   | +1 | javadoc | 240 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | -1 | mvninstall | 17 | dist in the patch failed. |
   | +1 | compile | 930 | the patch passed |
   | +1 | cc | 930 | the patch passed |
   | +1 | javac | 930 | the patch passed |
   | +1 | checkstyle | 203 | the patch passed |
   | +1 | mvnsite | 283 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 695 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 320 | the patch passed |
   | +1 | javadoc | 239 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 99 | common in the patch passed. |
   | -1 | unit | 77 | container-service in the patch failed. |
   | +1 | unit | 115 | server-scm in the patch passed. |
   | +1 | unit | 44 | common in the patch passed. |
   | +1 | unit | 30 | dist in the patch passed. |
   | -1 | unit | 881 | integration-test in the patch failed. |
   | +1 | unit | 69 | ozone-manager in the patch passed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 8309 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux e0ab2a69d251 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-mvnsite-hadoop-hdds_container-service.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-mvnsite-hadoop-hdds_server-scm.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/16/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 

[GitHub] [hadoop] hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws 
Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611#issuecomment-473105531
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:---:|---:|:---|:---|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 979 | trunk passed |
   | +1 | compile | 98 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 71 | trunk passed |
   | +1 | shadedclient | 768 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 33 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 57 | the patch passed |
   | +1 | compile | 85 | the patch passed |
   | +1 | javac | 85 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 46 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 42 | the patch passed |
   | +1 | javadoc | 29 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | -1 | unit | 610 | integration-test in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3783 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/611 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux f9e5873f0390 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/2/testReport/ |
   | Max. process+thread count | 3880 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-03-14 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793153#comment-16793153
 ] 

Siyao Meng commented on HADOOP-13656:
-

Thanks for the patch. Some suggestions:

1. Line 238: I believe you could use
{code:java}
CommandFormat cf = new CommandFormat(0, 1, "immediate", "fs");
{code}
instead of cf.addOptionWithValue() to keep the coding style consistent.

2. Line 226: "[-immediate] [-fs  ]"; <- extra whitespace between > and the
last ]. Keep the style consistent with your doc.

3. Line 285: remove extra trailing whitespace.

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, 
> HADOOP-13656.003.patch
>
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have
> to change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473097706
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 986 | trunk passed |
   | +1 | compile | 969 | trunk passed |
   | +1 | checkstyle | 190 | trunk passed |
   | +1 | mvnsite | 351 | trunk passed |
   | +1 | shadedclient | 1260 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 59 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 266 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | -1 | mvninstall | 25 | integration-test in the patch failed. |
   | +1 | compile | 919 | the patch passed |
   | +1 | cc | 919 | the patch passed |
   | +1 | javac | 919 | the patch passed |
   | +1 | checkstyle | 189 | the patch passed |
   | +1 | mvnsite | 308 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 658 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 344 | the patch passed |
   | +1 | javadoc | 264 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 74 | common in the patch passed. |
   | -1 | unit | 84 | container-service in the patch failed. |
   | +1 | unit | 122 | server-scm in the patch passed. |
   | +1 | unit | 47 | common in the patch passed. |
   | +1 | unit | 34 | dist in the patch passed. |
   | -1 | unit | 702 | integration-test in the patch failed. |
   | +1 | unit | 58 | ozone-manager in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 8284 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | 
hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux f91e458bcc76 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/testReport/ |
   | Max. process+thread count | 3922 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/15/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws 
Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611#issuecomment-473088240
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 999 | trunk passed |
   | -1 | compile | 26 | ozone-manager in trunk failed. |
   | +1 | checkstyle | 20 | trunk passed |
   | -1 | mvnsite | 27 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 705 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 26 | ozone-manager in trunk failed. |
   | +1 | javadoc | 23 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 21 | ozone-manager in the patch failed. |
   | -1 | compile | 21 | ozone-manager in the patch failed. |
   | -1 | javac | 21 | ozone-manager in the patch failed. |
   | +1 | checkstyle | 14 | the patch passed |
   | -1 | mvnsite | 22 | ozone-manager in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 715 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 24 | ozone-manager in the patch failed. |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | ozone-manager in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2828 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/611 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a5c6876163da 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/branch-compile-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-mvninstall-hadoop-ozone_ozone-manager.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-compile-hadoop-ozone_ozone-manager.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-compile-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-611/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #579: HDDS-761. Create S3 subcommand to run S3 related operations

2019-03-14 Thread GitBox
bharatviswa504 commented on issue #579: HDDS-761. Create S3 subcommand to run 
S3 related operations
URL: https://github.com/apache/hadoop/pull/579#issuecomment-473076170
 
 
   +1 LGTM.
   Pending jenkins





[GitHub] [hadoop] bharatviswa504 commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
bharatviswa504 commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" 
throws Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611#issuecomment-473074050
 
 
   LGTM.
   Can we add a UT for this change?





[GitHub] [hadoop] vivekratnavel commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
vivekratnavel commented on issue #611: HDDS-1265. "ozone sh s3 getsecret" 
throws Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611#issuecomment-473073044
 
 
   @bharatviswa504 @swagle @avijayanhwx @elek 





[GitHub] [hadoop] vivekratnavel opened a new pull request #611: HDDS-1265. "ozone sh s3 getsecret" throws Null Pointer Exception for unsecured clusters

2019-03-14 Thread GitBox
vivekratnavel opened a new pull request #611: HDDS-1265. "ozone sh s3 
getsecret" throws Null Pointer Exception for unsecured clusters
URL: https://github.com/apache/hadoop/pull/611
 
 
   "ozone sh s3 getsecret" command throws a Null Pointer Exception.
   
   This patch fixes it by showing the following message:
   
   ```
   hadoop@fa14f2633ba4:~$ ozone sh s3 getsecret
   This command is not supported in unsecure clusters.
   ```
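The fix follows a plain guard-clause pattern: detect an unsecured cluster up front and return a clear message instead of dereferencing a missing security component (the source of the NPE). A minimal sketch of that pattern — class and method names here are illustrative, not the actual Ozone shell code:

```java
// Hedged sketch of the guard-clause pattern the fix describes; the class
// and method names are illustrative, not the actual Ozone implementation.
public class GetSecretGuard {
  static final String UNSECURE_MSG =
      "This command is not supported in unsecure clusters.";

  /**
   * Returns the user's S3 secret, or the explanatory message when security
   * is off -- instead of dereferencing a null security component.
   */
  public static String getS3Secret(boolean securityEnabled, String user) {
    if (!securityEnabled) {
      return UNSECURE_MSG;  // fail fast with a clear message, no NPE
    }
    // In a secured cluster the real code would ask the OM for the secret;
    // a placeholder keeps this sketch self-contained.
    return "awsAccessKey=" + user;
  }

  public static void main(String[] args) {
    System.out.println(getS3Secret(false, "hadoop"));
  }
}
```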





[jira] [Commented] (HADOOP-16188) s3a rename failed during copy, "Unable to copy part" + 200 error code

2019-03-14 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793058#comment-16793058
 ] 

Aaron Fabbri commented on HADOOP-16188:
---

uhhh.. I'm shaking my head here. S3 is sending a response that the SDK fails 
on, so the connection doesn't time out?

Not saying we shouldn't retry ourselves, just commentary on the state of the S3 
storage stack. Feels like the SDK should retry.
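The retry being argued for can be sketched in isolation: because S3 may answer with HTTP 200 plus an InternalError body, a correct retry loop must inspect the error code, not just the status. A hedged, self-contained illustration in plain Java — not the AWS SDK's or S3A's actual retry machinery:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// S3 can return HTTP 200 with an InternalError body on a copy-part request,
// so success cannot be judged from the status code alone.
public class CopyPartRetry {
  public static final class Result {
    public final int httpStatus;
    public final String errorCode;  // null means a genuine success
    public Result(int httpStatus, String errorCode) {
      this.httpStatus = httpStatus;
      this.errorCode = errorCode;
    }
  }

  /** Retries until a real success (200 and no error code) or attempts run out. */
  public static Result copyWithRetry(Supplier<Result> copyPart, int maxAttempts) {
    Result last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      last = copyPart.get();
      if (last.httpStatus == 200 && last.errorCode == null) {
        return last;
      }
      // 200 + "InternalError" still lands here and is retried.
    }
    return last;
  }

  public static void main(String[] args) {
    AtomicInteger calls = new AtomicInteger();
    Result r = copyWithRetry(
        () -> calls.incrementAndGet() < 3
            ? new Result(200, "InternalError")  // looks OK, isn't
            : new Result(200, null),            // genuine success
        5);
    System.out.println("attempts=" + calls.get() + " error=" + r.errorCode);
  }
}
```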



> s3a rename failed during copy, "Unable to copy part" + 200 error code
> -
>
> Key: HADOOP-16188
> URL: https://issues.apache.org/jira/browse/HADOOP-16188
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Error during a rename where AWS S3 seems to have some internal error *which 
> is not retried and returns status code 200*
> {code}
> com.amazonaws.SdkClientException: Unable to copy part: We encountered an 
> internal error. Please try again. (Service: Amazon S3; Status Code: 200; 
> Error Code: InternalError;
> {code}






[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265761945
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/O3fsDtFetcher.java
 ##
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.DtFetcher;
+import org.apache.hadoop.security.token.Token;
+
+
+/**
+ * A DT fetcher for OzoneFileSystem.
+ * It is only needed for the `hadoop dtutil` command.
+ */
+public class O3fsDtFetcher implements DtFetcher {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(O3fsDtFetcher.class);
+
+  private static final String SERVICE_NAME = OzoneConsts.OZONE_URI_SCHEME;
+
+  private static final String FETCH_FAILED =
+      "Fetch ozone delegation token failed";
+
+  /**
+   * Returns the service name for O3fs, which is also a valid URL prefix.
+   */
+  public Text getServiceName() {
+    return new Text(SERVICE_NAME);
+  }
+
+  public boolean isTokenRequired() {
 
 Review comment:
   No, we can't. It is an implementation of DtFetcher interface method.





[GitHub] [hadoop] hadoop-yetus commented on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #609: HADOOP-16193. add extra S3A MPU test to 
see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-473056442
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1004 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 647 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 44 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 28 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 684 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3052 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/609 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a628993d296c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 091a664 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-609/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv commented on issue #605: HDDS-1283. Fix the dynamic documentation of basic s3 client usage

2019-03-14 Thread GitBox
ajayydv commented on issue #605: HDDS-1283. Fix the dynamic documentation of 
basic s3 client usage
URL: https://github.com/apache/hadoop/pull/605#issuecomment-473054101
 
 
   +1





[GitHub] [hadoop] ajayydv commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
ajayydv commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265756537
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/O3fsDtFetcher.java
 ##
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.DtFetcher;
+import org.apache.hadoop.security.token.Token;
+
+
+/**
+ * A DT fetcher for OzoneFileSystem.
+ * It is only needed for the `hadoop dtutil` command.
+ */
+public class O3fsDtFetcher implements DtFetcher {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(O3fsDtFetcher.class);
+
+  private static final String SERVICE_NAME = OzoneConsts.OZONE_URI_SCHEME;
+
+  private static final String FETCH_FAILED =
+      "Fetch ozone delegation token failed";
+
+  /**
+   * Returns the service name for O3fs, which is also a valid URL prefix.
+   */
+  public Text getServiceName() {
+    return new Text(SERVICE_NAME);
+  }
+
+  public boolean isTokenRequired() {
 
 Review comment:
   yes





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #601: HDDS-1119. DN get OM 
certificate from SCM CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#discussion_r265755116
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
 ##
 @@ -78,29 +79,36 @@ public UserGroupInformation verify(String user, String tokenStr)
       throw new BlockTokenException("Failed to decode token : " + tokenStr);
     }
 
-  // TODO: revisit this when caClient is ready, skip signature check now.
-  /**
-   * the final code should like
-   * if (caClient == null) {
-   *   throw new SCMSecurityException("Certificate client not available to
-   *   validate token");
-   * }
-   */
-  if (caClient != null) {
-    X509Certificate singerCert = caClient.queryCertificate(
-        "certId=" + tokenId.getOmCertSerialId());
-    if (singerCert == null) {
-      throw new BlockTokenException("Can't find signer certificate " +
-          "(OmCertSerialId: " + tokenId.getOmCertSerialId() +
-          ") of the block token for user: " + tokenId.getUser());
-    }
-    Boolean validToken = caClient.verifySignature(tokenId.getBytes(),
-        token.getPassword(), singerCert);
-    if (!validToken) {
-      throw new BlockTokenException("Invalid block token for user: " +
-          tokenId.getUser());
-    }
+  if (caClient == null) {
+    throw new SCMSecurityException("Certificate client not available " +
+        "to validate token");
   }
+
+  X509Certificate singerCert;
+  try {
+    singerCert =
+        caClient.getCertificateFromLocal(tokenId.getOmCertSerialId());
 
 Review comment:
   Makes sense to me.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265752989
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -445,7 +444,18 @@ public void removeBucketAcls(
   @Override
   public Token getDelegationToken(Text renewer)
   throws IOException {
-    return ozoneManagerClient.getDelegationToken(renewer);
+
+    Token token =
+        ozoneManagerClient.getDelegationToken(renewer);
+    if (token != null) {
+      Text dtService =
+          getOMProxyProvider().getProxy().getDelegationTokenService();
+      token.setService(dtService);
 
 Review comment:
   Agree, if those changes are not there in ozone-0.4, the cherry-pick and 
merge will be messy.





[GitHub] [hadoop] BELUGABEHR opened a new pull request #610: [MAPREDUCE-7193] Review of CombineFile Code

2019-03-14 Thread GitBox
BELUGABEHR opened a new pull request #610: [MAPREDUCE-7193] Review of 
CombineFile Code
URL: https://github.com/apache/hadoop/pull/610
 
 
   





[GitHub] [hadoop] ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream Specification

2019-03-14 Thread GitBox
ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream 
Specification
URL: https://github.com/apache/hadoop/pull/575#discussion_r265749117
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractFSContractTestBase.java
 ##
 @@ -272,7 +272,7 @@ protected void handleRelaxedException(String action,
 if (getContract().isSupported(SUPPORTS_STRICT_EXCEPTIONS, false)) {
   throw e;
 }
-LOG.warn("The expected exception {}  was not the exception class" +
+LOG.warn("The expected exception {} was not the exception class" +
 
 Review comment:
   IIRC an extra last parameter of type Throwable is allowed for Logger





[jira] [Updated] (HADOOP-16185) S3Guard: Optimize performance of handling OOB operations in non-authoritative mode

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16185:

Affects Version/s: (was: 3.1.0)
   3.3.0

> S3Guard: Optimize performance of handling OOB operations in non-authoritative 
> mode
> --
>
> Key: HADOOP-16185
> URL: https://issues.apache.org/jira/browse/HADOOP-16185
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Priority: Minor
>
> HADOOP-15999 modifies S3Guard's non-authoritative mode, so when S3Guard 
> runs non-authoritative, every {{fs.getFileStatus}} will check S3 because we 
> don't handle the MetadataStore as a single source of truth. This has a 
> negative performance impact.
>  
> In other words, HADOOP-15999 is going to reinstate the HEAD on every read, 
> making non-auth S3Guard a bit slower. We could think about addressing that by 
> moving the checks into the input stream itself. That is: the first GET which 
> returns data will also act as the metadata check. That'd mean the read 
> context will need updating with some "metastoreProcessHeader" callback to 
> invoke on the first GET.
> The good news is that because it's reading a file, it's only one HTTP HEAD 
> request: no need to do either of the other two directory probes except in the 
> case that the file isn't there.
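The callback idea above can be sketched as a thin stream wrapper that fires a header hook exactly once, on the first read that returns data. Names like `MetadataCheckingStream` are illustrative assumptions, not S3A APIs:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Consumer;

// Sketch: the first GET that returns data also serves as the metadata
// check, via a "metastoreProcessHeader"-style callback handed to the stream.
public class MetadataCheckingStream extends InputStream {
    private final InputStream wrapped;
    private final String etagFromGet;              // header of the GET response
    private final Consumer<String> headerCallback; // hypothetical metastore hook
    private boolean headerDelivered = false;

    MetadataCheckingStream(byte[] data, String etag, Consumer<String> cb) {
        this.wrapped = new ByteArrayInputStream(data);
        this.etagFromGet = etag;
        this.headerCallback = cb;
    }

    @Override
    public int read() throws IOException {
        if (!headerDelivered) {   // fire exactly once, on the first read
            headerDelivered = true;
            headerCallback.accept(etagFromGet);
        }
        return wrapped.read();
    }

    public static void main(String[] args) throws IOException {
        StringBuilder seen = new StringBuilder();
        InputStream in = new MetadataCheckingStream(
                "hello".getBytes(), "etag-123", seen::append);
        while (in.read() != -1) { /* drain */ }
        // the callback fired once despite multiple reads
        System.out.println("callback saw: " + seen);
    }
}
```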



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793031#comment-16793031
 ] 

Steve Loughran commented on HADOOP-15999:
-

really close to getting in, ran lots of tests, am happy. I tried adding a new 
test but failed and gave up; HADOOP-16193 is the outcome there.

One more change to request: skip going to s3 if the file checked is a 
directory. Because if the dest is also a directory, there's no difference.

Pro: misses out the two failing HEAD calls and an expensive LIST whose output 
is discarded

Con: doesn't catch the special failure case: someone has taken a directory 
path /a/b/ and overwritten it with a file /a/b.

If we did want to worry about that, then rather than doing the whole 
s3GetFileStatus call, we only need to execute a single getObjectMetadata for 
the key "a/b" and, if something is actually there, do an update.

That would still be hitting the store, but it'd only be doing 1/3 as many 
requests.
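A toy comparison of the two probe strategies makes the 1/3 figure concrete. The fake store and method names here are assumptions for illustration, not S3A internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: compare the full three-probe status check
// (HEAD key, HEAD key + "/", LIST) against the single getObjectMetadata
// probe for the "directory overwritten by a file" case.
public class DirProbeDemo {
    // fake object store: key -> object length
    static Map<String, Long> store = new HashMap<>();
    static int requestCount = 0;

    static Long headObject(String key) {   // stands in for getObjectMetadata
        requestCount++;
        return store.get(key);
    }

    // full status check: HEAD key, HEAD key + "/", then a LIST (simplified)
    static void fullStatusCheck(String key) {
        headObject(key);
        headObject(key + "/");
        requestCount++; // the LIST call
    }

    public static void main(String[] args) {
        store.put("a/b", 42L);   // someone replaced directory a/b/ with file a/b

        fullStatusCheck("a/b");
        int full = requestCount;

        requestCount = 0;
        boolean dirBecameFile = headObject("a/b") != null; // one probe, bare key

        // the single probe is a third of the requests and still
        // catches the dir-overwritten-by-file case
        System.out.println("full=" + full + " single=" + requestCount
                + " dirBecameFile=" + dirBecameFile);
    }
}
```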



> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, HADOOP-15999.008.patch, 
> HADOOP-15999.009.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
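The "query both and trust whichever has the higher modified time" idea above can be sketched as a small resolver. The `Entry` type is illustrative, not the S3A `FileStatus` classes:

```java
// Sketch: given an entry from the MetadataStore and one from S3,
// serve whichever was modified more recently.
public class NewerWinsDemo {
    static class Entry {
        final long modTime;
        final long length;
        Entry(long modTime, long length) {
            this.modTime = modTime;
            this.length = length;
        }
    }

    static Entry resolve(Entry fromMetaStore, Entry fromS3) {
        if (fromMetaStore == null) return fromS3;
        if (fromS3 == null) return fromMetaStore;
        // the out-of-band write shows up as a newer mod time on the S3 side
        return fromS3.modTime > fromMetaStore.modTime ? fromS3 : fromMetaStore;
    }

    public static void main(String[] args) {
        // out-of-band overwrite: S3 has a longer, newer object than the store
        Entry stale = new Entry(1000L, 10L);
        Entry fresh = new Entry(2000L, 25L);
        System.out.println("length served: " + resolve(stale, fresh).length);
    }
}
```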






[GitHub] [hadoop] ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-14 Thread GitBox
ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-473040233
 
 
   pulled your branch and ran into other compile issues:
   ```
   [ERROR] Errors:
   [ERROR]   TestZKSignerSecretProvider.testMultiple1:247->testMultiple:301 NoSuchMethod or...
   [ERROR]   TestZKSignerSecretProvider.testMultiple2:252->testMultiple:301 NoSuchMethod or...
   [ERROR]   TestZKSignerSecretProvider.testOne:87 NoSuchMethod org.mockito.Mockito.timeout...
   [INFO]
   [ERROR] Tests run: 138, Failures: 0, Errors: 3, Skipped: 0
   ```





[jira] [Commented] (HADOOP-16185) S3Guard: Optimize performance of handling OOB operations in non-authoritative mode

2019-03-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793019#comment-16793019
 ] 

Steve Loughran commented on HADOOP-16185:
-

thanks for filing this.

> S3Guard: Optimize performance of handling OOB operations in non-authoritative 
> mode
> --
>
> Key: HADOOP-16185
> URL: https://issues.apache.org/jira/browse/HADOOP-16185
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Priority: Minor






[jira] [Updated] (HADOOP-16185) S3Guard: Optimize performance of handling OOB operations in non-authoritative mode

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16185:

Priority: Minor  (was: Major)

> S3Guard: Optimize performance of handling OOB operations in non-authoritative 
> mode
> --
>
> Key: HADOOP-16185
> URL: https://issues.apache.org/jira/browse/HADOOP-16185
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Priority: Minor






[GitHub] [hadoop] steveloughran commented on issue #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-03-14 Thread GitBox
steveloughran commented on issue #609: HADOOP-16193. add extra S3A MPU test to 
see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609#issuecomment-473036424
 
 
   Tested: S3 Ireland. Remember to use -Dscale when running this test suite, 
even though this case isn't a slow one.





[jira] [Assigned] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16193:
---

Assignee: Steve Loughran

> add extra S3A MPU test to see what happens if a file is created during the MPU
> --
>
> Key: HADOOP-16193
> URL: https://issues.apache.org/jira/browse/HADOOP-16193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Proposed extra test for the S3A MPU: if you create and then delete a file 
> while an MPU is in progress, when you finally complete the MPU the new data 
> is present.
> This verifies that the other FS operations don't somehow cancel the 
> in-progress upload, and that eventual consistency brings the latest value out.






[GitHub] [hadoop] steveloughran opened a new pull request #609: HADOOP-16193. add extra S3A MPU test to see what happens if a file is created during the MPU

2019-03-14 Thread GitBox
steveloughran opened a new pull request #609: HADOOP-16193. add extra S3A MPU 
test to see what happens if a file is created during the MPU
URL: https://github.com/apache/hadoop/pull/609
 
 
   HADOOP-16193. add extra S3A MPU test to see what happens if a file is 
created during the MPU
   
   Change-Id: I15942ff8c7a772e1bf718bccbe4a249d20fa3ef2





[jira] [Updated] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16193:

Summary: add extra S3A MPU test to see what happens if a file is created 
during the MPU  (was: add extra S3A MPU test to see what happens if a file is 
created then deleted mid-write)

> add extra S3A MPU test to see what happens if a file is created during the MPU
> --
>
> Key: HADOOP-16193
> URL: https://issues.apache.org/jira/browse/HADOOP-16193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor






[jira] [Created] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created then deleted mid-write

2019-03-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16193:
---

 Summary: add extra S3A MPU test to see what happens if a file is 
created then deleted mid-write
 Key: HADOOP-16193
 URL: https://issues.apache.org/jira/browse/HADOOP-16193
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran








[jira] [Updated] (HADOOP-15625) S3A input stream to use etags/version number to detect changed source files

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15625:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch backported into branch-3.2, closing as done!

thank you to everyone who helped here - it's been through a few iterations and 
now I'm happy. I'm also staring at other bits of the code thinking "if we want 
to consistently propagate etag data, how do I do it"

> S3A input stream to use etags/version number to detect changed source files
> ---
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch, HADOOP-15625-013-delta.patch, 
> HADOOP-15625-013.patch, HADOOP-15625-014.patch, HADOOP-15625-015.patch, 
> HADOOP-15625-015.patch, HADOOP-15625-016.patch, HADOOP-15625-017.patch, 
> HADOOP-15625-branch-3.2-018.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get any of: old data, new data, or even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
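The three-step policy above (cache the first etag, verify it on later GETs, raise an IOE on mismatch) can be sketched as follows. The fake store and class names are assumptions for illustration, not the eventual S3A change-detection code:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch of etag-based change detection: remember the etag from the
// first HEAD/GET and fail loudly if any later GET returns a different one.
public class EtagCheckDemo {
    static Map<String, String> etags = new HashMap<>();  // key -> current etag

    static class TrackedRead {
        final String key;
        String firstEtag;  // cached from the first request

        TrackedRead(String key) { this.key = key; }

        void get() throws IOException {
            String now = etags.get(key);
            if (firstEtag == null) {
                firstEtag = now;                 // step 1: cache first etag
            } else if (!firstEtag.equals(now)) { // step 2: verify on later GETs
                // step 3: loud failure instead of silently mixing versions
                throw new IOException("etag changed during read: "
                        + firstEtag + " -> " + now);
            }
        }
    }

    public static void main(String[] args) {
        etags.put("data.csv", "v1");
        TrackedRead r = new TrackedRead("data.csv");
        try {
            r.get();                       // first GET, caches "v1"
            etags.put("data.csv", "v2");   // file overwritten out of band
            r.get();                       // seek-triggered GET detects it
            System.out.println("no change detected");
        } catch (IOException e) {
            System.out.println("detected: " + e.getMessage());
        }
    }
}
```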






[jira] [Updated] (HADOOP-15625) S3A input stream to use etags/version number to detect changed source files

2019-03-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15625:

Fix Version/s: 3.2.1

> S3A input stream to use etags/version number to detect changed source files
> ---
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0, 3.2.1






[GitHub] [hadoop] ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-14 Thread GitBox
ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-473030090
 
 
   +1 LGTM but have not tested yet. Working on that now.





[GitHub] [hadoop] ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-14 Thread GitBox
ajfabbri commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-473029760
 
 
   How do you usually apply these pull requests locally for testing? Anything 
easier than (1) adding remote (2) checkout branch from new remote, etc.?  
Almost wish there was a "click to download .patch" here.





[jira] [Commented] (HADOOP-16192) CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and doesn't update backoff when refreshed

2019-03-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792965#comment-16792965
 ] 

Hadoop QA commented on HADOOP-16192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16192 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962505/HADOOP-16192.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5c770838cc1a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d60673c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16054/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16054/testReport/ |
| Max. process+thread count | 1625 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16054/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org 

[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473010068
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 23 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1047 | trunk passed |
   | +1 | compile | 1100 | trunk passed |
   | +1 | checkstyle | 204 | trunk passed |
   | +1 | mvnsite | 338 | trunk passed |
   | +1 | shadedclient | 1295 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 58 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 241 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | -1 | mvninstall | 17 | dist in the patch failed. |
   | -1 | mvninstall | 24 | integration-test in the patch failed. |
   | +1 | compile | 930 | the patch passed |
   | +1 | cc | 930 | the patch passed |
   | +1 | javac | 930 | the patch passed |
   | +1 | checkstyle | 199 | the patch passed |
   | +1 | mvnsite | 282 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 699 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 326 | the patch passed |
   | -1 | javadoc | 48 | hadoop-hdds_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 100 | common in the patch passed. |
   | +1 | unit | 76 | container-service in the patch passed. |
   | +1 | unit | 118 | server-scm in the patch passed. |
   | +1 | unit | 44 | common in the patch passed. |
   | +1 | unit | 31 | dist in the patch passed. |
   | -1 | unit | 847 | integration-test in the patch failed. |
   | +1 | unit | 70 | ozone-manager in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 8626 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOmInit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux c5a38e9be3b0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/diff-javadoc-javadoc-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/testReport/ |
   | Max. process+thread count | 3859 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/13/console |
   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-473009100
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 23 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 982 | trunk passed |
   | +1 | compile | 959 | trunk passed |
   | +1 | checkstyle | 194 | trunk passed |
   | +1 | mvnsite | 323 | trunk passed |
   | +1 | shadedclient | 1244 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | -1 | findbugs | 60 | hadoop-hdds/container-service in trunk has 1 extant 
Findbugs warnings. |
   | +1 | javadoc | 264 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 19 | dist in the patch failed. |
   | -1 | mvninstall | 25 | integration-test in the patch failed. |
   | +1 | compile | 925 | the patch passed |
   | +1 | cc | 925 | the patch passed |
   | +1 | javac | 925 | the patch passed |
   | +1 | checkstyle | 193 | the patch passed |
   | +1 | mvnsite | 307 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist hadoop-ozone/integration-test |
   | +1 | findbugs | 301 | the patch passed |
   | +1 | javadoc | 210 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | common in the patch passed. |
   | -1 | unit | 73 | container-service in the patch failed. |
   | +1 | unit | 114 | server-scm in the patch passed. |
   | +1 | unit | 47 | common in the patch passed. |
   | +1 | unit | 35 | dist in the patch passed. |
   | -1 | unit | 697 | integration-test in the patch failed. |
   | +1 | unit | 57 | ozone-manager in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 8118 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  yamllint  |
   | uname | Linux 11395b071d45 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/branch-findbugs-hadoop-hdds_container-service-warnings.html
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/testReport/ |
   | Max. process+thread count | 3477 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/14/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] vivekratnavel commented on issue #579: HDDS-761. Create S3 subcommand to run S3 related operations

2019-03-14 Thread GitBox
vivekratnavel commented on issue #579: HDDS-761. Create S3 subcommand to run S3 
related operations
URL: https://github.com/apache/hadoop/pull/579#issuecomment-473008126
 
 
   Failures are unrelated to this patch


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #594: HDDS-1246. Add ozone delegation token 
utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#issuecomment-473005533
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1099 | trunk passed |
   | +1 | compile | 102 | trunk passed |
   | +1 | checkstyle | 32 | trunk passed |
   | +1 | mvnsite | 106 | trunk passed |
   | +1 | shadedclient | 738 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 137 | trunk passed |
   | +1 | javadoc | 85 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 93 | the patch passed |
   | +1 | compile | 90 | the patch passed |
   | +1 | javac | 90 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 83 | the patch passed |
   | +1 | shellcheck | 25 | There were no new shellcheck issues. |
   | +1 | shelldocs | 17 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 810 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 154 | the patch passed |
   | +1 | javadoc | 77 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 38 | common in the patch passed. |
   | +1 | unit | 28 | client in the patch passed. |
   | +1 | unit | 91 | ozonefs in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4122 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/594 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  shellcheck  shelldocs  |
   | uname | Linux 3e369e125c30 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/1/testReport/ |
   | Max. process+thread count | 2982 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozonefs 
U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-594/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #585: HDDS-1138. Ozone Client should avoid talking to SCM directly

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #585: HDDS-1138. Ozone Client should avoid 
talking to SCM directly
URL: https://github.com/apache/hadoop/pull/585#issuecomment-472989637
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 984 | trunk passed |
   | +1 | compile | 109 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 128 | trunk passed |
   | +1 | shadedclient | 787 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 169 | trunk passed |
   | +1 | javadoc | 105 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 139 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 105 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 198 | the patch passed |
   | +1 | javadoc | 95 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 27 | client in the patch passed. |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | +1 | unit | 27 | objectstore-service in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3964 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-585/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/585 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux fc8ac8483089 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-585/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-585/2/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-585/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream Specification

2019-03-14 Thread GitBox
ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream 
Specification
URL: https://github.com/apache/hadoop/pull/575#discussion_r265692719
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/StreamStateModel.java
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIOException;
+
+import static org.apache.hadoop.fs.FSExceptionMessages.STREAM_IS_CLOSED;
+
+/**
+ * Models a stream's state and can be used for checking this state before
+ * any operation.
+ *
+ * The model has three states: Open, Error, and Closed,
+ *
+ * 
+ *   Open: caller can interact with the stream.
+ *   Error: all operations will raise the previously recorded exception.
+ *   Closed: operations will be rejected.
+ * 
+ */
+public class StreamStateModel {
+
+  /**
+   * States of the stream.
+   */
+  public enum State {
+
+/**
+ * Stream is open.
+ */
+Open,
+
+/**
+ * Stream is in an error state.
+ * It is not expected to recover from this.
+ */
+Error,
+
+/**
+ * Stream is now closed. Operations will fail.
+ */
+Closed
+  }
+
+  /**
+   * Path; if not empty then a {@link PathIOException} will be raised
+   * containing this path.
+   */
+  private final String path;
 
 Review comment:
   minor nit: /will be raised/will be added to any exceptions raised/.. 





[GitHub] [hadoop] ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream Specification

2019-03-14 Thread GitBox
ajfabbri commented on a change in pull request #575: HADOOP-13327 Output Stream 
Specification
URL: https://github.com/apache/hadoop/pull/575#discussion_r265691746
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/StreamStateModel.java
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIOException;
+
+import static org.apache.hadoop.fs.FSExceptionMessages.STREAM_IS_CLOSED;
+
+/**
+ * Models a stream's state and can be used for checking this state before
+ * any operation.
+ *
+ * The model has three states: Open, Error, and Closed,
+ *
+ * 
+ *   Open: caller can interact with the stream.
+ *   Error: all operations will raise the previously recorded exception.
+ *   Closed: operations will be rejected.
+ * 
+ */
+public class StreamStateModel {
+
+  /**
+   * States of the stream.
+   */
+  public enum State {
+
+/**
+ * Stream is open.
+ */
+Open,
+
+/**
+ * Stream is in an error state.
+ * It is not expected to recover from this.
+ */
+Error,
+
+/**
+ * Stream is now closed. Operations will fail.
+ */
+Closed
+  }
+
+  /**
+   * Path; if not empty then a {@link PathIOException} will be raised
+   * containing this path.
+   */
+  private final String path;
+
+  /** Lock. Not considering an InstrumentedWriteLock, but it is an option. */
+  private final Lock lock = new ReentrantLock();
+
+  /**
+   * Initial state: open.
+   * This is volatile: it can be queried without encountering any locks.
+   * However, to guarantee the state is constant through the life of an
+   * operation, updates must be through the synchronized methods.
+   */
+  private volatile State state = State.Open;
+
+  /** Any exception to raise on the next checkOpen call. */
+  private IOException exception;
+
+  public StreamStateModel(final Path path) {
+this.path = path.toString();
+  }
+
+  public StreamStateModel(final String path) {
+this.path = path;
+  }
+
+  /**
+   * Get the current state.
+   * Not synchronized; lock if you want consistency across calls.
+   * @return the current state.
+   */
+  public State getState() {
+return state;
+  }
+
+  /**
+   * Change state to closed. No-op if the state was in closed or error
+   * @return true if the state changed.
+   */
+  public synchronized boolean enterClosedState() {
+if (state == State.Open) {
+  state = State.Closed;
+  return true;
+} else {
+  return false;
+}
+  }
+
+  /**
+   * Change state to error and stores first error so it can be re-thrown.
+   * If already in error: return previous exception.
+   * @param ex the exception to record
+   * @return the exception set when the error state was entered.
 
 Review comment:
   Hi Sean! Current code, yes, but would we ever actually need that 
information? (Maybe in a unit test). It is going to be in error when the call 
returns either way.
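
   The semantics under discussion, recording the first error and returning the exception that was set when the Error state was entered, can be sketched as follows. This is a simplified, hypothetical stand-in for the quoted `StreamStateModel`, not the actual patch code; class and method names are illustrative assumptions.

```java
import java.io.IOException;

/**
 * Sketch of the enterErrorState() semantics discussed above: the FIRST
 * exception is recorded; later calls keep and return the recorded one.
 * Simplified from the quoted StreamStateModel; not the real class.
 */
public class ErrorStateSketch {
  public enum State { Open, Error, Closed }

  private State state = State.Open;
  private IOException exception;

  /** Record the first error; subsequent errors do not overwrite it. */
  public synchronized IOException enterErrorState(IOException ex) {
    if (state != State.Error) {
      state = State.Error;
      exception = ex;
    }
    // Always the exception set when the Error state was entered.
    return exception;
  }

  public static void main(String[] args) {
    ErrorStateSketch s = new ErrorStateSketch();
    IOException first = new IOException("first");
    IOException second = new IOException("second");
    System.out.println(s.enterErrorState(first) == first);   // true
    System.out.println(s.enterErrorState(second) == first);  // true: first kept
  }
}
```

   Returning the original exception (rather than the latest one) is what lets a caller, or a unit test, verify which failure actually moved the stream into the Error state.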





[jira] [Commented] (HADOOP-16186) NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-14 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792902#comment-16792902
 ] 

Gabor Bota commented on HADOOP-16186:
-

{quote}There's clearly some codepath which can surface which is causing 
failures in some situations, and having multiple patches switching between the 
&& and || operators isn't going to fix it
{quote}
You are right, but we need to find a way to reproduce it and write a test for 
it, i.e. find that code path.

I cannot reproduce the issue simply by running the tests. 

> NPE in ITestS3AFileSystemContract teardown in  
> DynamoDBMetadataStore.lambda$listChildren
> 
>
> Key: HADOOP-16186
> URL: https://issues.apache.org/jira/browse/HADOOP-16186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> Test run options. NPE in test teardown
> {code}
> -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb
> {code}
> If you look at the code, its *exactly* the place fixed in HADOOP-15827, a 
> change which HADOOP-15947 reverted. 
> There's clearly some codepath which can surface which is causing failures in 
> some situations, and having multiple patches switching between the && and || 
> operators isn't going to fix it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16192) CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and doesn't update backoff when refreshed

2019-03-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16192:
-
Attachment: HADOOP-16192.000.patch

> CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and 
> doesn't update backoff when refreshed
> --
>
> Key: HADOOP-16192
> URL: https://issues.apache.org/jira/browse/HADOOP-16192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16192.000.patch
>
>
> The {{CallQueueManager}} has a mechanism to enforce backoff when various 
> criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
> However, it currently only checks for these backoff criteria when {{put()}} 
> is used; {{add()}} calls are passed directly to the underlying call queue. It 
> should check the backoff criteria for either call.
> Also, when {{refreshCallQueue()}} is called, the backoff configuration is not 
> refreshed. This should be updated as well.






[jira] [Updated] (HADOOP-16192) CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and doesn't update backoff when refreshed

2019-03-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16192:
-
Description: 
The {{CallQueueManager}} has a mechanism to enforce backoff when various 
criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
However, it currently only checks for these backoff criteria when {{put()}} is 
used; {{add()}} calls are passed directly to the underlying call queue. It 
should check the backoff criteria for either call.

Also, when {{refreshCallQueue()}} is called, the backoff configuration is not 
refreshed. This should be updated as well.

  was:The {{CallQueueManager}} has a mechanism to enforce backoff when various 
criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
However, it currently only checks for these backoff criteria when {{put()}} is 
used; {{add()}} calls are passed directly to the underlying call queue. It 
should check the backoff criteria for either call.


> CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and 
> doesn't update backoff when refreshed
> --
>
> Key: HADOOP-16192
> URL: https://issues.apache.org/jira/browse/HADOOP-16192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The {{CallQueueManager}} has a mechanism to enforce backoff when various 
> criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
> However, it currently only checks for these backoff criteria when {{put()}} 
> is used; {{add()}} calls are passed directly to the underlying call queue. It 
> should check the backoff criteria for either call.
> Also, when {{refreshCallQueue()}} is called, the backoff configuration is not 
> refreshed. This should be updated as well.






[jira] [Updated] (HADOOP-16192) CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and doesn't update backoff when refreshed

2019-03-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16192:
-
Status: Patch Available  (was: In Progress)

> CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and 
> doesn't update backoff when refreshed
> --
>
> Key: HADOOP-16192
> URL: https://issues.apache.org/jira/browse/HADOOP-16192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HADOOP-16192.000.patch
>
>
> The {{CallQueueManager}} has a mechanism to enforce backoff when various 
> criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
> However, it currently only checks for these backoff criteria when {{put()}} 
> is used; {{add()}} calls are passed directly to the underlying call queue. It 
> should check the backoff criteria for either call.
> Also, when {{refreshCallQueue()}} is called, the backoff configuration is not 
> refreshed. This should be updated as well.






[jira] [Updated] (HADOOP-16192) CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and doesn't update backoff when refreshed

2019-03-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16192:
-
Summary: CallQueue backoff bug fixes: doesn't perform backoff when add() is 
used, and doesn't update backoff when refreshed  (was: CallQueue doesn't 
perform backoff when add() is used)

> CallQueue backoff bug fixes: doesn't perform backoff when add() is used, and 
> doesn't update backoff when refreshed
> --
>
> Key: HADOOP-16192
> URL: https://issues.apache.org/jira/browse/HADOOP-16192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The {{CallQueueManager}} has a mechanism to enforce backoff when various 
> criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
> However, it currently only checks for these backoff criteria when {{put()}} 
> is used; {{add()}} calls are passed directly to the underlying call queue. It 
> should check the backoff criteria for either call.






[jira] [Work started] (HADOOP-16192) CallQueue doesn't perform backoff when add() is used

2019-03-14 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16192 started by Erik Krogen.

> CallQueue doesn't perform backoff when add() is used
> 
>
> Key: HADOOP-16192
> URL: https://issues.apache.org/jira/browse/HADOOP-16192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> The {{CallQueueManager}} has a mechanism to enforce backoff when various 
> criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
> However, it currently only checks for these backoff criteria when {{put()}} 
> is used; {{add()}} calls are passed directly to the underlying call queue. It 
> should check the backoff criteria for either call.






[jira] [Created] (HADOOP-16192) CallQueue doesn't perform backoff when add() is used

2019-03-14 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-16192:


 Summary: CallQueue doesn't perform backoff when add() is used
 Key: HADOOP-16192
 URL: https://issues.apache.org/jira/browse/HADOOP-16192
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Erik Krogen
Assignee: Erik Krogen


The {{CallQueueManager}} has a mechanism to enforce backoff when various 
criteria are met, as defined by the implementation of {{RpcScheduler}} used. 
However, it currently only checks for these backoff criteria when {{put()}} is 
used; {{add()}} calls are passed directly to the underlying call queue. It 
should check the backoff criteria for either call.
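
The bug described above can be sketched as follows. This is a minimal, hypothetical model of the fix, not the real Hadoop {{CallQueueManager}}/{{RpcScheduler}} code; the class, interface, and exception names are illustrative assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Minimal sketch (not the real Hadoop CallQueueManager) of the fix in
 * HADOOP-16192: backoff criteria must be enforced for both put() and add(),
 * rather than letting add() bypass the scheduler's decision.
 */
public class BackoffQueueSketch {

  /** Stand-in for the RpcScheduler's backoff decision. */
  public interface Scheduler {
    boolean shouldBackOff();
  }

  public static class CallQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Scheduler scheduler;

    public CallQueue(Scheduler scheduler) {
      this.scheduler = scheduler;
    }

    /** Shared check so put() and add() enforce the same criteria. */
    private void checkBackoff() {
      if (scheduler.shouldBackOff()) {
        throw new IllegalStateException("backoff");
      }
    }

    public void put(String call) throws InterruptedException {
      checkBackoff();   // already checked here before the fix
      queue.put(call);
    }

    public boolean add(String call) {
      checkBackoff();   // the fix: add() no longer bypasses backoff
      return queue.add(call);
    }

    public int size() {
      return queue.size();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    CallQueue relaxed = new CallQueue(() -> false);
    relaxed.put("a");
    relaxed.add("b");
    System.out.println(relaxed.size());   // 2: both calls accepted

    CallQueue backingOff = new CallQueue(() -> true);
    boolean rejected = false;
    try {
      backingOff.add("c");                // now rejected, same as put()
    } catch (IllegalStateException e) {
      rejected = true;
    }
    System.out.println(rejected);         // true
  }
}
```

Routing both entry points through one shared check is also what makes the second fix (re-reading the backoff configuration on {{refreshCallQueue()}}) effective for all callers at once.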






[GitHub] [hadoop] hadoop-yetus commented on issue #608: HDDS-1284. Adjust default values of pipline recovery for more resilient service restart

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #608: HDDS-1284. Adjust default values of 
pipline recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/608#issuecomment-472963511
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1008 | trunk passed |
   | +1 | compile | 49 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 708 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 69 | trunk passed |
   | +1 | javadoc | 38 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 40 | the patch passed |
   | +1 | compile | 34 | the patch passed |
   | +1 | javac | 34 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 78 | the patch passed |
   | +1 | javadoc | 40 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 61 | common in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3119 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-608/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/608 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux da87f73cf658 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-608/1/testReport/ |
   | Max. process+thread count | 441 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-608/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #585: HDDS-1138. Ozone Client should avoid talking to SCM directly

2019-03-14 Thread GitBox
xiaoyuyao commented on issue #585: HDDS-1138. Ozone Client should avoid talking 
to SCM directly
URL: https://github.com/apache/hadoop/pull/585#issuecomment-472959581
 
 
   Rebase after the following two conflicting changes:
   HDDS-1220. KeyManager#openKey should release the bucket lock before doing an 
allocateBlock. Contributed by Lokesh Jain.
   HDDS-1095. OzoneManager#openKey should do multiple block allocations in a 
single SCM rpc call. Contributed by Mukul Kumar Singh.
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265665773
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/O3fsDtFetcher.java
 ##
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.DtFetcher;
+import org.apache.hadoop.security.token.Token;
+
+
+/**
+ * A DT fetcher for OzoneFileSystem.
+ * It is only needed for the `hadoop dtutil` command.
+ */
+public class O3fsDtFetcher implements DtFetcher {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(O3fsDtFetcher.class);
+
+  private static final String SERVICE_NAME = OzoneConsts.OZONE_URI_SCHEME;
+
+  private static final String FETCH_FAILED =
+  "Fetch ozone delegation token failed";
+
+  /**
+   * Returns the service name for O3fs, which is also a valid URL prefix.
+   */
+  public Text getServiceName() {
+return new Text(SERVICE_NAME);
+  }
+
+  public boolean isTokenRequired() {
 
 Review comment:
   You mean isTokenRequired?





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265665546
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -576,7 +576,7 @@ message OMTokenProto {
 optional uint32 sequenceNumber = 7;
 optional uint32 masterKeyId= 8;
 optional uint64 expiryDate = 9;
-required string omCertSerialId = 10;
+optional string omCertSerialId = 10;
 
 Review comment:
   Agree, this PR was created before that change. This will be fixed in the 
next push after rebase.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265665217
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneDelegationTokenSelector.java
 ##
 @@ -44,7 +44,9 @@ public OzoneDelegationTokenSelector() {
   public Token<OzoneTokenIdentifier> selectToken(Text service,
       Collection<Token<? extends TokenIdentifier>> tokens) {
     LOG.trace("Getting token for service {}", service);
-    return super.selectToken(service, tokens);
+    Token<OzoneTokenIdentifier> token = super.selectToken(service, tokens);
+    LOG.info("Got tokens: {} for service {}", token, service);
 
 Review comment:
   Yes, I will definitely change this to DEBUG.
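The change the reviewer agrees to here, demoting a per-call log line from INFO to DEBUG, is worth guarding so the log message is never built when debug logging is off. A minimal self-contained sketch, using java.util.logging as a stand-in for SLF4J (FINE is the JUL analogue of DEBUG):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the proposed change: per-token logging demoted from INFO to DEBUG
// and guarded, so the token string is never built unless debug logging is on.
// java.util.logging stands in for SLF4J (FINE ~ DEBUG) to stay self-contained.
public class TokenLogLevelSketch {
    private static final Logger LOG =
        Logger.getLogger(TokenLogLevelSketch.class.getName());

    // Stand-in for an expensive Token#toString() call.
    static String describeToken() {
        return "Kind: OzoneToken, Service: om:9862";
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // typical production level: debug suppressed

        if (LOG.isLoggable(Level.FINE)) { // analogue of LOG.isDebugEnabled()
            LOG.fine("Got token: " + describeToken());
        }
        // At INFO level the guard is false and describeToken() never runs.
        System.out.println(LOG.isLoggable(Level.FINE));
    }
}
```

With SLF4J's parameterized logging ({} placeholders) the guard is often unnecessary, but an explicit toString() concatenation still benefits from it.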





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265665005
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
 ##
 @@ -84,16 +86,23 @@ public OMFailoverProxyProvider(OzoneConfiguration 
configuration,
   public final class OMProxyInfo
   extends FailoverProxyProvider.ProxyInfo {
 private InetSocketAddress address;
+private Text dtService;
 
 Review comment:
   The dtService will be a combined URI format that contains both instances' 
ip:port. The token selector will be updated to accept it.
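The idea described in this comment can be sketched as below. The separator, helper names, and matching rule are all assumptions for illustration, not the actual Ozone implementation: one combined service string covering every OM instance, with a selector that accepts any member of the list.

```java
// Hypothetical sketch of the idea in the comment: combine every OM instance's
// ip:port into one delegation-token service string, so a token issued against
// the HA service matches whichever OM the client fails over to. The separator
// and helper names are assumptions, not the actual Ozone implementation.
public class CombinedDtServiceSketch {

    static String buildDtService(String... omAddresses) {
        return String.join(",", omAddresses);
    }

    static boolean serviceMatches(String dtService, String omAddress) {
        // A selector updated for HA would accept any member of the combined list.
        for (String addr : dtService.split(",")) {
            if (addr.equals(omAddress)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String dtService = buildDtService("10.0.0.1:9862", "10.0.0.2:9862");
        System.out.println(serviceMatches(dtService, "10.0.0.2:9862"));
    }
}
```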





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265664265
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -445,7 +444,18 @@ public void removeBucketAcls(
   @Override
   public Token<OzoneTokenIdentifier> getDelegationToken(Text renewer)
       throws IOException {
-    return ozoneManagerClient.getDelegationToken(renewer);
+
+    Token<OzoneTokenIdentifier> token =
+        ozoneManagerClient.getDelegationToken(renewer);
+    if (token != null) {
+      Text dtService =
+          getOMProxyProvider().getProxy().getDelegationTokenService();
+      token.setService(dtService);
 
 Review comment:
   We will revisit this when full OM HA is implemented. For 0.4 we can assume 
this will always return the first proxy. We have to do this because the HA 
code has been partially implemented in trunk. We don't want to work on two 
different versions that are difficult to merge later.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.

2019-03-14 Thread GitBox
xiaoyuyao commented on a change in pull request #594: HDDS-1246. Add ozone 
delegation token utility subcmd for Ozone CLI. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/594#discussion_r265663195
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -445,7 +444,18 @@ public void removeBucketAcls(
   @Override
   public Token<OzoneTokenIdentifier> getDelegationToken(Text renewer)
       throws IOException {
-    return ozoneManagerClient.getDelegationToken(renewer);
+
+    Token<OzoneTokenIdentifier> token =
+        ozoneManagerClient.getDelegationToken(renewer);
+    if (token != null) {
+      Text dtService =
+          getOMProxyProvider().getProxy().getDelegationTokenService();
+      token.setService(dtService);
+      LOG.info("Created " + token.toString());
+    } else {
+      LOG.info("Cannot get ozone delegation token from " + renewer);
 
 Review comment:
   will fix in the next push.





[GitHub] [hadoop] hadoop-yetus commented on issue #607: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #607: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/607#issuecomment-472953344
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1012 | trunk passed |
   | +1 | compile | 35 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 741 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 45 | trunk passed |
   | +1 | javadoc | 27 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 49 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 280 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3256 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-607/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/607 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 143ba7eca545 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-607/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-607/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16191) AliyunOSS: returned statuses of copyFile and copyDirectory are ignored

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Summary: AliyunOSS: returned statuses of copyFile and copyDirectory are 
ignored  (was: AliyunOSS: returned status of copyFile and copyDirectory are 
ignored)

> AliyunOSS: returned statuses of copyFile and copyDirectory are ignored
> --
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 






[GitHub] [hadoop] hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-472943946
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1001 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 717 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 43 | trunk passed |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3169 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/606 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b3971a01fcbb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16191) AliyunOSS: returned status of copyFile and copyDirectory are ignored

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Description: 
The returned statuses of copyFile and copyDirectory are ignored. This is OK most 
of the time.
{code:java}
if (srcStatus.isDirectory()) {
  copyDirectory(srcPath, dstPath);
} else {
  copyFile(srcPath, srcStatus.getLen(), dstPath);
}

return srcPath.equals(dstPath) || delete(srcPath, true);
{code}
However, the OSS fs cannot catch errors when renaming from one dir to another if 
the src dir is being deleted. 
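The fix this issue implies can be sketched as follows. The helper names mirror the snippet above but the bodies are stand-ins, not the real AliyunOSS code: the boolean results of copyFile/copyDirectory are checked, so a copy that fails mid-rename surfaces as a failed rename instead of a silent success.

```java
// Sketch (hypothetical helper bodies): check the boolean results of
// copyFile/copyDirectory instead of discarding them, so a copy that fails
// mid-rename surfaces as a failed rename.
public class CopyStatusSketch {

    // Stand-ins for the real OSS copy helpers; here the copy always fails,
    // e.g. because the source directory is being deleted concurrently.
    static boolean copyDirectory(String src, String dst) { return false; }
    static boolean copyFile(String src, long len, String dst) { return false; }
    static boolean delete(String path, boolean recursive) { return true; }

    static boolean rename(String src, String dst, boolean isDir, long len) {
        boolean copied = isDir
            ? copyDirectory(src, dst)
            : copyFile(src, len, dst);
        if (!copied) {
            return false; // previously ignored: rename reported success anyway
        }
        return src.equals(dst) || delete(src, true);
    }

    public static void main(String[] args) {
        // With the status checked, the failed copy makes rename return false.
        System.out.println(rename("/src/file", "/dst/file", false, 42L));
    }
}
```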

> AliyunOSS: returned status of copyFile and copyDirectory are ignored
> 
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>
> Returned status of copyFile and copyDirectory are ignored. It's OK most of 
> the time.
> {code:java}
> if (srcStatus.isDirectory()) {
> copyDirectory(srcPath, dstPath);
> } else {
> copyFile(srcPath, srcStatus.getLen(), dstPath);
> }
>   
> return srcPath.equals(dstPath) || delete(srcPath, true);{code}
> However, oss fs can not catch errors when rename from one dir to another if 
> the src dir is being deleted. 






[jira] [Updated] (HADOOP-16191) AliyunOSS: returned status of copyFile and copyDirectory are ignored

2019-03-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-16191:
-
Issue Type: Sub-task  (was: Task)
Parent: HADOOP-13377

> AliyunOSS: returned status of copyFile and copyDirectory are ignored
> 
>
> Key: HADOOP-16191
> URL: https://issues.apache.org/jira/browse/HADOOP-16191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.3.0, 3.1.2
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>







[jira] [Created] (HADOOP-16191) AliyunOSS: returned status of copyFile and copyDirectory are ignored

2019-03-14 Thread wujinhu (JIRA)
wujinhu created HADOOP-16191:


 Summary: AliyunOSS: returned status of copyFile and copyDirectory 
are ignored
 Key: HADOOP-16191
 URL: https://issues.apache.org/jira/browse/HADOOP-16191
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/oss
Affects Versions: 3.1.2, 3.0.3, 2.9.2, 2.10.0, 3.3.0
Reporter: wujinhu
Assignee: wujinhu









[GitHub] [hadoop] elek opened a new pull request #608: HDDS-1284. Adjust default values of pipeline recovery for more resilient service restart

2019-03-14 Thread GitBox
elek opened a new pull request #608: HDDS-1284. Adjust default values of 
pipeline recovery for more resilient service restart
URL: https://github.com/apache/hadoop/pull/608
 
 
   As of now we have the following algorithm to handle node failures:
   
   1. In case of a missing node, the leader of the pipeline or the SCM can 
detect the missing heartbeats.
   2. SCM will start to close the pipeline (CLOSING state) and try to close the 
containers with the remaining nodes in the pipeline.
   3. After 5 minutes the pipeline will be destroyed (CLOSED) and a new 
pipeline can be created from the healthy nodes (one node can be part of only 
one pipeline at a time).
   
   While this algorithm can work well on a big cluster, it doesn't provide 
very good usability on small clusters:
   
   Use case 1:
   
   Given 3 nodes, in case of a service restart, if the restart takes more than 
90s, the pipeline will be moved to the CLOSING state. For the next 5 minutes 
(ozone.scm.pipeline.destroy.timeout) the container will remain in the CLOSING 
state. As there are no more nodes and we can't assign the same node to two 
different pipelines, the cluster will be unavailable for 5 minutes.
   
   Use case 2:
   
   Given 90 nodes and 30 pipelines where all the pipelines are spread across 3 
racks, let's stop one rack. As all the pipelines are affected, all the 
pipelines will be moved to the CLOSING state. We have no free nodes, therefore 
we need to wait for 5 minutes to write any data to the cluster.
   
   These problems can be solved in multiple ways:
   
   1.) Instead of waiting 5 minutes, destroy the pipeline when all the 
containers are reported to be closed. (Most of the time this is enough, but 
some container reports can be missing.)
   2.) Support multi-raft and open a pipeline as soon as we have enough nodes 
(even if the nodes already have CLOSING pipelines).
   
   Both options require more work on the pipeline management side. For 
0.4.0 we can adjust the following parameters to get a better user experience:
   
   {code:xml}
   <property>
     <name>ozone.scm.pipeline.destroy.timeout</name>
     <value>60s</value>
     <tag>OZONE, SCM, PIPELINE</tag>
     <description>
       Once a pipeline is closed, SCM should wait for the above configured time
       before destroying a pipeline.
     </description>
   </property>
   <property>
     <name>ozone.scm.stale.node.interval</name>
     <value>90s</value>
     <tag>OZONE, MANAGEMENT</tag>
     <description>
       The interval for stale node flagging. Please
       see ozone.scm.heartbeat.thread.interval before changing this value.
     </description>
   </property>
   {code}
   
   First of all, we can be more optimistic and mark a node as stale only after 
5 minutes instead of 90s. 5 minutes should be enough most of the time for the 
nodes to recover.
   
   Second: we can decrease the value of ozone.scm.pipeline.destroy.timeout. 
Ideally the close command is sent by the SCM to the datanode with a HB. Between 
two HBs we have enough time to close all the containers via Ratis. With the 
next HB, the datanode can report the successful close. (If the containers 
can't be closed, the SCM can manage the QUASI_CLOSED containers.)
   
   We need to wait 29 seconds (worst case) for the next HB, and 29+30 seconds 
for the confirmation, so 66 seconds seems to be a safe choice (assuming that 6 
seconds is enough to process the report about the successful closing).
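The worst-case arithmetic above can be checked with a short calculation. The 30s heartbeat interval is an assumption inferred from the 29s figure in the text (one second short of a full interval):

```java
// Back-of-the-envelope check of the timeout reasoning above, assuming a 30s
// heartbeat interval (the 29s figure in the text is one second short of it).
public class DestroyTimeoutSketch {
    public static void main(String[] args) {
        int hb = 30;                             // assumed heartbeat interval (s)
        int waitForNextHb = hb - 1;              // worst case: close command just missed a HB
        int waitForConfirm = waitForNextHb + hb; // report arrives with the following HB
        int processing = 6;                      // slack to process the close report
        int total = waitForConfirm + processing;
        // 29 + 30 + 6 = 65s; the description picks 66s as a safe round-up.
        System.out.println(total);
    }
}
```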
   
   See: https://issues.apache.org/jira/browse/HDDS-1284





[GitHub] [hadoop] hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #601: HDDS-1119. DN get OM certificate from SCM 
CA for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-472936643
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/601 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/601 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-601/11/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #605: HDDS-1283. Fix the dynamic documentation of basic s3 client usage

2019-03-14 Thread GitBox
hadoop-yetus commented on issue #605: HDDS-1283. Fix the dynamic documentation 
of basic s3 client usage
URL: https://github.com/apache/hadoop/pull/605#issuecomment-472929153
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 996 | trunk passed |
   | +1 | compile | 51 | trunk passed |
   | +1 | mvnsite | 27 | trunk passed |
   | +1 | shadedclient | 1680 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | s3gateway in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2762 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-605/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/605 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  |
   | uname | Linux a2034f8c818c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d60673c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-605/1/testReport/ |
   | Max. process+thread count | 444 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-605/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #607: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-14 Thread GitBox
bgaborg opened a new pull request #607: HADOOP-15999. S3Guard: Better support 
for out-of-band operations
URL: https://github.com/apache/hadoop/pull/607
 
 
   





[jira] [Created] (HADOOP-16189) S3A copy/rename of large files to be parallelized as a multipart operation

2019-03-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16189:
---

 Summary: S3A copy/rename of large files to be parallelized as a multipart operation
 Key: HADOOP-16189
 URL: https://issues.apache.org/jira/browse/HADOOP-16189
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


AWS docs on 
[copying|https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectsUsingAPIs.html]

* files up to 5 GB can be copied in a single operation
* files larger than 5 GB MUST use the multipart API.

But even for files < 5 GB, that's a really slow operation, and if HADOOP-16188 
is to be believed, there's not enough retrying.
Even if the transfer manager does switch to multipart copies at some size, we 
can do the same for file copy ourselves, just as we do our writes in 32-64 MB 
blocks. Something like:

{code}
size = len(src)
if size < fs.s3a.block.size:
    single copy
else:
    split the file by blocks, initiate the multipart upload, then execute
    each block copy as an operation in the S3A thread pool; once all parts
    are done, complete the operation
{code}

Plus: retry individual block copies, so that the failure of one block doesn't 
force a retry of the whole upload.

This is potentially more complex than it sounds, as we need to:
* track the state of the ongoing copy operation
* handle failures (abort, etc.)
* use the if-modified/version headers to fail fast if the source file changes 
partway through the copy
* use a bigger block size if len(file)/fs.s3a.block.size > max-block-count
* maybe fall back to the classic operation

Overall, what sounds simple could get complex fast, or at least become a bigger 
piece of code. Needs a PoC demonstrating the speedup before attempting.
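The split-and-retry scheme above can be sketched in Python. This is a minimal sketch with hypothetical names: `part_ranges`, `copy_with_retries`, and `multipart_copy` are illustrative only, and the `copy_part` callable stands in for the AWS SDK's copy-part request that the real S3A implementation would issue.

```python
from concurrent.futures import ThreadPoolExecutor

def part_ranges(length, block_size):
    """Split [0, length) into consecutive (start, end) ranges of at
    most block_size bytes -- one copy-part request per range."""
    return [(start, min(start + block_size, length))
            for start in range(0, length, block_size)]

def copy_with_retries(copy_part, part, attempts=3):
    """Retry a single part copy, so one transient failure does not
    force a retry of the whole multipart operation."""
    for attempt in range(attempts):
        try:
            return copy_part(part)
        except IOError:
            if attempt == attempts - 1:
                raise

def multipart_copy(length, block_size, copy_part, pool_size=4):
    """Run the per-part copies in a thread pool (standing in for the
    S3A thread pool) and return the part results in order, which is
    what the final complete-multipart request would need."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(
            lambda p: copy_with_retries(copy_part, p),
            part_ranges(length, block_size)))
```

For example, a 100-byte source with a 32-byte block size yields the ranges (0, 32), (32, 64), (64, 96), (96, 100); with real sizes the same arithmetic applies to 32-64 MB blocks.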



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792794#comment-16792794
 ] 

Steve Loughran commented on HADOOP-15999:
-

bq.  (note to myself: if I use low ddb.table.capacity.read and forget to 
modify it on the dashboard tests will timeout and fail)

Try switching to PAYG capacity. I have; there are a few tests we need to fix, but 
otherwise all seems well. Regarding the tests timing out *rather than failing*, 
I consider that a success of HADOOP-15426: no matter how overloaded things are, 
your client shouldn't fail.

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, HADOOP-15999.008.patch, 
> HADOOP-15999.009.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
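The "use whichever one has the higher modified time" reconciliation described above can be sketched as follows. This is a hypothetical model: `Status` and `reconcile` are illustrative names, and the real S3AFileStatus and MetadataStore types carry far more state than this.

```python
from collections import namedtuple

# Hypothetical, cut-down stand-in for a file status record.
Status = namedtuple("Status", ["path", "length", "mod_time"])

def reconcile(s3_status, store_status):
    """Return the fresher of the raw-S3 record and the MetadataStore
    record; either may be None if the file is missing on that side."""
    if s3_status is None:
        return store_status
    if store_status is None:
        return s3_status
    # On a tie, prefer S3: it is the ultimate source of truth here.
    if s3_status.mod_time >= store_status.mod_time:
        return s3_status
    return store_status
```

An out-of-band overwrite then surfaces as an S3 record with a newer mod_time (and possibly a longer length), which wins over the stale store entry instead of being masked by it.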






[GitHub] [hadoop] steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-14 Thread GitBox
steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-472916788
 
 
   Initial PR: untested





[GitHub] [hadoop] steveloughran opened a new pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-14 Thread GitBox
steveloughran opened a new pull request #606: HADOOP-16190. S3A copyFile 
operation to include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606
 
 
   HADOOP-16190. S3A copyFile operation to include source versionID or etag in 
the copy request
   
   This patch adds the constraints on the request, and maps a 412 response to a 
RemoteFileChangedException.
   
   No obvious test for this. The way to do it would be to get an invalid 
etag/version into the request and see what happens, which would complicate the 
copy API a bit, but that is something we will need for etag/version tracking in 
S3Guard anyway.
   
   Change-Id: I4b229336ba2d57018bd8b66888b807074419598e
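The 412-to-exception mapping described in this patch can be sketched like this. The names are hypothetical (the actual translation happens inside S3A's AWS-SDK error handling, not in standalone code), but the HTTP semantics are real: S3 returns 412 Precondition Failed when a conditional header such as `x-amz-copy-source-if-match` no longer matches the source object.

```python
class RemoteFileChangedException(IOError):
    """The source object's etag/versionId no longer matches the one
    captured when the copy began."""

def translate_copy_error(status_code, path):
    """Map the HTTP status of a failed conditional copy to an exception.

    412 Precondition Failed means the etag or version constraint on the
    copy source was violated: the file changed under us mid-copy.
    """
    if status_code == 412:
        return RemoteFileChangedException(
            "%s: source changed during copy (precondition failed)" % path)
    return IOError("%s: copy failed with HTTP %d" % (path, status_code))
```

Callers can then catch RemoteFileChangedException specifically and treat it as a consistency failure rather than a retryable I/O error.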




