[GitHub] [hadoop] hadoop-yetus commented on issue #1408: HADOOP-13363. Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1408: HADOOP-13363. Upgrade protobuf from 
2.5.0 to something newer
URL: https://github.com/apache/hadoop/pull/1408#issuecomment-530704890
 
 
   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1408/6/console in case 
of problems.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16556) Fix some LGTM alerts

2019-09-12 Thread Malcolm Taylor (Jira)
Malcolm Taylor created HADOOP-16556:
---

 Summary: Fix some LGTM alerts
 Key: HADOOP-16556
 URL: https://issues.apache.org/jira/browse/HADOOP-16556
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Malcolm Taylor


LGTM analysis of Hadoop has raised some alerts 
([https://lgtm.com/projects/g/apache/hadoop/?mode=tree]). This issue is to fix 
some of the more straightforward ones.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ChenSammi commented on issue #1194: HDDS-1879. Support multiple excluded scopes when choosing datanodes in NetworkTopology

2019-09-12 Thread GitBox
ChenSammi commented on issue #1194: HDDS-1879.  Support multiple excluded 
scopes when choosing datanodes in NetworkTopology
URL: https://github.com/apache/hadoop/pull/1194#issuecomment-530706555
 
 
   Is the build system wrong? I don't think I changed the code that caused the 
compile failure. 





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928288#comment-16928288
 ] 

Vinayakumar B commented on HADOOP-13363:


bq. Release process: can it be issued by the ASF?
[~stack]/[~anu] any update on this from your end as you already have experience 
in this area?

bq. shading complicates CVE tracking. We need to have a process for listing 
what is shaded. Maybe by creating some manifest file, after agreeing with our 
peer projects what such a manifest could look like
Yes. There is a need for such a manifest file. I will check what can be done. 
Maybe this is applicable to the 'hadoop-client-runtime' shading as well. 

bq. at some point soon 2020? we will have to think about making java 9 the 
minimum version for branch-3. At which point we can all embrace java 9 modules. 
I don't want to box us in for maintaining a shaded JAR forever in that world
I didn't get the relation of the shaded jar to the Java 9 upgrade. Can you 
please elaborate?

bq. As discussed above, Yetus-update is not required. I think we need to modify 
dev-support/docker/Dockerfile to install the correct version of protocol 
buffers, or protoc maven approach. Sorry for the late reply.
[~aajisaka], yes, if it is only a protobuf version upgrade, then the changes 
will be in the Dockerfile.
But as I explained above, the shaded dependency jar can be maintained within 
Hadoop's repo as a submodule activated using a profile. In this case, changes 
in the build step will be required to build the shaded dependency first, before 
executing 'mvn compile' with the patch. 
This is because, with the patch, there is no "mvn install" executed on the 
root, so the latest shaded jar will not be available in the local repo.
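The submodule-behind-a-profile arrangement described above could look roughly 
like the following root-pom fragment. This is a hypothetical sketch: the 
profile id and module name are illustrative, not Hadoop's actual build 
configuration.

```xml
<!-- Hypothetical sketch: a shaded-dependency submodule activated by a profile.
     The module must be built and 'mvn install'-ed first, so the relocated
     protobuf jar is present in the local repository before the per-patch
     'mvn compile' runs. Names here are illustrative assumptions. -->
<profiles>
  <profile>
    <id>build-shaded-protobuf</id>
    <modules>
      <module>hadoop-shaded-protobuf</module>
    </modules>
  </profile>
</profiles>
```

Activating the profile (e.g. `mvn install -Pbuild-shaded-protobuf -pl <module>`) 
before the patch build would address the missing-jar problem described above.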

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Created] (HADOOP-16557) Upgrade protobuf.version to 3.7.1

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16557:
--

 Summary: Upgrade protobuf.version to 3.7.1 
 Key: HADOOP-16557
 URL: https://issues.apache.org/jira/browse/HADOOP-16557
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Bump up the "protobuf.version" to 3.7.1 and ensure everything compiles successfully.
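The bump itself would be a one-line property change, roughly as below. This is 
a sketch of the pattern only; the property name comes from the issue title, and 
its exact location in Hadoop's root pom.xml is an assumption.

```xml
<!-- Sketch: the property bump described above, as it might appear
     in the root pom.xml. -->
<properties>
  <protobuf.version>3.7.1</protobuf.version>
</properties>
```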






[GitHub] [hadoop] 1m2c3t4 opened a new pull request #1428: HADOOP-16556. Fix some alerts raised by LGTM

2019-09-12 Thread GitBox
1m2c3t4 opened a new pull request #1428: HADOOP-16556. Fix some alerts raised 
by LGTM
URL: https://github.com/apache/hadoop/pull/1428
 
 
   Fix the following alerts:
   
   - `Array index out of bounds` in KerberosName.java
   
   - `Contradictory type checks` in GenericExceptionHandler
   
   - `Missing format argument` in RegistrySecurity and HttpFSExceptionProvider
   
   These alerts are shown in 
https://lgtm.com/projects/g/apache/hadoop/?mode=tree
   
   
   





[jira] [Updated] (HADOOP-16562) [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16562:
---
Summary: [pb-upgrade] Update docker image to have 3.7.1 protoc executable  
(was: Update docker image to have 3.7.1 protoc executable)

> [pb-upgrade] Update docker image to have 3.7.1 protoc executable
> 
>
> Key: HADOOP-16562
> URL: https://issues.apache.org/jira/browse/HADOOP-16562
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Priority: Major
>
> Current docker image is installed with 2.5.0 protobuf executable.
> During the process of upgrading protobuf to 3.7.1, docker needs to have both 
> versions for yetus to verify.






[GitHub] [hadoop] steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-12 Thread GitBox
steveloughran commented on issue #1208: HADOOP-16423. S3Guard fsck: Check 
metadata consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530748126
 
 
   # +1
   
   As well as all the automated tests, did some manual command line operations.
   * empty args
   * command without -check
   * -check without path
   * against store marked as auth but with incomplete MS
   * after doing an import, same store
   * empty store
   * unguarded store
   
   All outcomes were as expected 
   
   I'm happy with this 
   
   ## Followup
   
   One of the changes with the HADOOP-16430 PR is that we now have an S3A FS 
method `boolean allowAuthoritative(final Path path)` that takes a path and 
returns true iff it's authoritative, either by the MS being auth *or* the given 
path being marked as one of the authoritative dirs. I think the validation of 
whether an authoritative directory is consistent between the metastore and S3 
should use this when it wants to highlight that an authoritative path is 
inconsistent. 
   
   This can be a follow-on patch, because as usual it will need more tests in 
the code, and someone to try out the command line.
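The check described above can be modeled with a small stand-alone sketch. This 
is a simplified, hypothetical illustration: the real S3A method operates on 
Hadoop `Path` objects and FS configuration, not plain strings, and the class 
and field names below are invented for the example.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical, simplified model of allowAuthoritative(Path): a path is
// authoritative iff the metastore itself is authoritative, OR the path
// falls under one of the configured authoritative directories.
public class AuthoritativeCheck {
    private final boolean metastoreAuthoritative;
    private final List<String> authoritativeDirs;

    public AuthoritativeCheck(boolean metastoreAuthoritative,
                              List<String> authoritativeDirs) {
        this.metastoreAuthoritative = metastoreAuthoritative;
        this.authoritativeDirs = authoritativeDirs;
    }

    /** True iff the MS is auth, or the path is under an auth dir. */
    public boolean allowAuthoritative(String path) {
        if (metastoreAuthoritative) {
            return true;
        }
        return authoritativeDirs.stream()
            .anyMatch(dir -> path.equals(dir) || path.startsWith(dir + "/"));
    }

    public static void main(String[] args) {
        AuthoritativeCheck check =
            new AuthoritativeCheck(false, Arrays.asList("/tables/auth"));
        System.out.println(check.allowAuthoritative("/tables/auth/part-0")); // true
        System.out.println(check.allowAuthoritative("/tables/other"));       // false
    }
}
```

A consistency checker could then flag inconsistencies only on paths for which 
this predicate returns true, as the comment suggests.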





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-09-12 Thread GitBox
hadoop-yetus removed a comment on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-527286087
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1249 | trunk passed |
   | +1 | compile | 1067 | trunk passed |
   | +1 | checkstyle | 152 | trunk passed |
   | +1 | mvnsite | 162 | trunk passed |
   | +1 | shadedclient | 1102 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 117 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 264 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 114 | the patch passed |
   | +1 | compile | 1009 | the patch passed |
   | +1 | javac | 1009 | the patch passed |
   | -0 | checkstyle | 154 | root: The patch generated 1 new + 232 unchanged - 
9 fixed = 233 total (was 241) |
   | +1 | mvnsite | 160 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 36 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 164 | hadoop-common-project/hadoop-common generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   | -1 | findbugs | 34 | hadoop-aws in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 618 | hadoop-common in the patch passed. |
   | +1 | unit | 341 | hadoop-mapreduce-client-core in the patch passed. |
   | -1 | unit | 36 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
   | | | 7864 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Null pointer dereference of Globber.fs in new 
org.apache.hadoop.fs.Globber(FileContext, Path, PathFilter, boolean)  
Dereferenced at Globber.java:in new org.apache.hadoop.fs.Globber(FileContext, 
Path, PathFilter, boolean)  Dereferenced at Globber.java:[line 105] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f91ffafecdff 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 915cbc9 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/diff-checkstyle-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/testReport/ |
   | Max. process+thread count | 1379 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/14/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-09-12 Thread GitBox
hadoop-yetus removed a comment on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-525951689
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1084 | trunk passed |
   | +1 | compile | 1044 | trunk passed |
   | +1 | checkstyle | 190 | trunk passed |
   | +1 | mvnsite | 198 | trunk passed |
   | +1 | shadedclient | 1246 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 76 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 304 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 130 | the patch passed |
   | +1 | compile | 1171 | the patch passed |
   | +1 | javac | 1171 | the patch passed |
   | -0 | checkstyle | 176 | root: The patch generated 1 new + 231 unchanged - 
9 fixed = 232 total (was 240) |
   | +1 | mvnsite | 190 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 832 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | -1 | findbugs | 167 | hadoop-common-project/hadoop-common generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 573 | hadoop-common in the patch passed. |
   | +1 | unit | 347 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 86 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 8301 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Null pointer dereference of Globber.fs in new 
org.apache.hadoop.fs.Globber(FileContext, Path, PathFilter, boolean)  
Dereferenced at Globber.java:in new org.apache.hadoop.fs.Globber(FileContext, 
Path, PathFilter, boolean)  Dereferenced at Globber.java:[line 105] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux de25fbd5c009 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3e6a016 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/13/artifact/out/diff-checkstyle-root.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/13/artifact/out/new-findbugs-hadoop-common-project_hadoop-common.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/13/testReport/ |
   | Max. process+thread count | 1531 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16562) [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928453#comment-16928453
 ] 

Hudson commented on HADOOP-16562:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17284 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17284/])
HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc (github: 
rev f4f9f0fe4f215e2e1b88b0607102f22388acfe45)
* (edit) dev-support/docker/Dockerfile


> [pb-upgrade] Update docker image to have 3.7.1 protoc executable
> 
>
> Key: HADOOP-16562
> URL: https://issues.apache.org/jira/browse/HADOOP-16562
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> Current docker image is installed with 2.5.0 protobuf executable.
> During the process of upgrading protobuf to 3.7.1, docker needs to have both 
> versions for yetus to verify.






[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-12 Thread GitBox
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work 
with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530794042
 
 
   As for the unit test failures for the latest commit, I tested that:
   





[jira] [Created] (HADOOP-16561) [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16561:
--

 Summary: [MAPREDUCE] use protobuf-maven-plugin to generate 
protobuf classes
 Key: HADOOP-16561
 URL: https://issues.apache.org/jira/browse/HADOOP-16561
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download the protobuf executable and 
generate protobuf classes from the proto files.
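A plugin configuration along these lines could do what the issue describes. 
This is a hedged sketch assuming the xolstice protobuf-maven-plugin; the plugin 
version shown and the use of `${os.detected.classifier}` (which requires the 
os-maven-plugin build extension) are illustrative assumptions, not Hadoop's 
actual configuration.

```xml
<!-- Sketch, assuming the xolstice protobuf-maven-plugin: it downloads a
     platform-specific protoc binary from Maven Central, so no locally
     installed protoc is required. Version numbers are illustrative. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <protocArtifact>
      com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}
    </protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

This removes the dependency on a protoc binary baked into the build docker 
image, which is the motivation behind this sub-task.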






[jira] [Created] (HADOOP-16560) [YARN] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16560:
--

 Summary: [YARN] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16560
 URL: https://issues.apache.org/jira/browse/HADOOP-16560
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download the protobuf executable and 
generate protobuf classes from the proto files.






[GitHub] [hadoop] hadoop-yetus commented on issue #1429: HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1429: HADOOP-16562. [pb-upgrade] Update docker 
image to have 3.7.1 protoc executable
URL: https://github.com/apache/hadoop/pull/1429#issuecomment-530735086
 
 
   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1429/1/console in case 
of problems.
   





[jira] [Assigned] (HADOOP-16562) [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B reassigned HADOOP-16562:
--

Assignee: Vinayakumar B

> [pb-upgrade] Update docker image to have 3.7.1 protoc executable
> 
>
> Key: HADOOP-16562
> URL: https://issues.apache.org/jira/browse/HADOOP-16562
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> Current docker image is installed with 2.5.0 protobuf executable.
> During the process of upgrading protobuf to 3.7.1, docker needs to have both 
> versions for yetus to verify.






[jira] [Updated] (HADOOP-16562) [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16562:
---
Status: Patch Available  (was: Open)

> [pb-upgrade] Update docker image to have 3.7.1 protoc executable
> 
>
> Key: HADOOP-16562
> URL: https://issues.apache.org/jira/browse/HADOOP-16562
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Priority: Major
>
> Current docker image is installed with 2.5.0 protobuf executable.
> During the process of upgrading protobuf to 3.7.1, docker needs to have both 
> versions for yetus to verify.






[GitHub] [hadoop] steveloughran commented on issue #1407: HADOOP-16490. Avoid/handle cached 404s during S3A file creation

2019-09-12 Thread GitBox
steveloughran commented on issue #1407: HADOOP-16490. Avoid/handle cached 404s 
during S3A file creation
URL: https://github.com/apache/hadoop/pull/1407#issuecomment-530749745
 
 
   This is now committed. Thanks for the reviews!





[GitHub] [hadoop] steveloughran closed pull request #1407: HADOOP-16490. Avoid/handle cached 404s during S3A file creation

2019-09-12 Thread GitBox
steveloughran closed pull request #1407: HADOOP-16490. Avoid/handle cached 404s 
during S3A file creation
URL: https://github.com/apache/hadoop/pull/1407
 
 
   





[GitHub] [hadoop] mukund-thakur edited a comment on issue #1404: HDFS-13660 Copy file till the source file length during distcp

2019-09-12 Thread GitBox
mukund-thakur edited a comment on issue #1404: HDFS-13660 Copy file till the 
source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530764029
 
 
   > yes. It's a change in behaviour, but probably a good one. A test will 
verify that things fail the way we expect -and will continue to do so.
   >
   I tried writing the truncate test case, but truncate calls always fail 
while the copy is happening, with the error: (Failed to TRUNCATE_FILE 
/tmp/source/1/3 for DFSClient_NONMAPREDUCE_-2094558173_166 on 127.0.0.1 
because DFSClient_NONMAPREDUCE_-2094558173_166 is already the current lease 
holder.)
   
   > One little concern though: where does that filestatus come from? It's 
created in the mapper just before the operation, right? As if it's created when 
the list of files to copy is created (which I doubt...) then the current code 
handles the case where the file is changed between job schedule and task 
execution -that would be a visible regression, which we would have to worry 
about.
   > 
   The filestatus is created in the mapper, so I think we are good here. 
   





[jira] [Created] (HADOOP-16563) S3Guard fsck: Detect if a directory is authoritative and highlight errors if detected in it

2019-09-12 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-16563:
---

 Summary: S3Guard fsck: Detect if a directory is authoritative and 
highlight errors if detected in it
 Key: HADOOP-16563
 URL: https://issues.apache.org/jira/browse/HADOOP-16563
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota


Followup from HADOOP-16423.

One of the changes with the HADOOP-16430 PR is that we now have an S3A FS 
method boolean allowAuthoritative(final Path path) that takes a path and 
returns true iff it's authoritative, either by the MS being auth or the given 
path being marked as one of the authoritative dirs. I think the validation of 
whether an authoritative directory is consistent between the metastore and S3 
should use this when it wants to highlight that an authoritative path is 
inconsistent.

This can be a follow-on patch, because as usual it will need more tests in the 
code, and someone to try out the command line.






[GitHub] [hadoop] 1m2c3t4 commented on issue #474: fix some alerts raised by LGTM

2019-09-12 Thread GitBox
1m2c3t4 commented on issue #474: fix some alerts raised by LGTM
URL: https://github.com/apache/hadoop/pull/474#issuecomment-530711827
 
 
   @aajisaka I have now created HADOOP-16556. I'll close this PR and replace it 
with #1428 





[GitHub] [hadoop] 1m2c3t4 closed pull request #474: fix some alerts raised by LGTM

2019-09-12 Thread GitBox
1m2c3t4 closed pull request #474: fix some alerts raised by LGTM
URL: https://github.com/apache/hadoop/pull/474
 
 
   





[GitHub] [hadoop] timmylicheng edited a comment on issue #1418: HDDS-2089: Add createPipeline CLI.

2019-09-12 Thread GitBox
timmylicheng edited a comment on issue #1418: HDDS-2089: Add createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418#issuecomment-530719904
 
 
   > LGTM, can you add an acceptance test for the new command?
   
   I've created a new JIRA for adding acceptance test for all Pipeline related 
CLI. https://issues.apache.org/jira/browse/HDDS-2115





[GitHub] [hadoop] timmylicheng commented on issue #1418: HDDS-2089: Add createPipeline CLI.

2019-09-12 Thread GitBox
timmylicheng commented on issue #1418: HDDS-2089: Add createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418#issuecomment-530719904
 
 
   > LGTM, can you add an acceptance test for the new command?
   
   I've created a new JIRA for adding acceptance test for all Pipeline related 
CLI.





[jira] [Created] (HADOOP-16562) Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16562:
--

 Summary: Update docker image to have 3.7.1 protoc executable
 Key: HADOOP-16562
 URL: https://issues.apache.org/jira/browse/HADOOP-16562
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


The current docker image has the 2.5.0 protobuf executable installed.

During the process of upgrading protobuf to 3.7.1, the docker image needs to 
carry both versions so that Yetus can verify patches against either one.






[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-12 Thread GitBox
hadoop-yetus removed a comment on issue #1402: HADOOP-16547. make sure that 
s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-528125833
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 3801 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1779 | trunk passed |
   | +1 | compile | 37 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 871 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 33 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 65 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 908 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | the patch passed |
   | +1 | findbugs | 75 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 94 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 8023 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 07c10077afc9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ae28747 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-12 Thread GitBox
hadoop-yetus removed a comment on issue #1402: HADOOP-16547. make sure that 
s3guard prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-529876473
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1091 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 51 | trunk passed |
   | +1 | shadedclient | 1085 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 97 | trunk passed |
   | 0 | spotbugs | 168 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 156 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 104 | the patch passed |
   | +1 | compile | 74 | the patch passed |
   | +1 | javac | 74 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 110 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 1243 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 56 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 78 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4555 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d5d6e4c7e9cb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c3beeb7 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/2/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] timmylicheng opened a new pull request #1431: HDDS-1569 Support creating multiple pipelines with same datanode

2019-09-12 Thread GitBox
timmylicheng opened a new pull request #1431: HDDS-1569 Support creating 
multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431
 
 
   #HDDS-1569 Use PipelinePlacementPolicy to support creating multiple 
pipelines with same datanode





[GitHub] [hadoop] hadoop-yetus commented on issue #1428: HADOOP-16556. Fix some alerts raised by LGTM

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1428: HADOOP-16556. Fix some alerts raised by 
LGTM
URL: https://github.com/apache/hadoop/pull/1428#issuecomment-530755828
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1232 | trunk passed |
   | +1 | compile | 1120 | trunk passed |
   | +1 | checkstyle | 164 | trunk passed |
   | -1 | mvnsite | 41 | hadoop-yarn-common in trunk failed. |
   | +1 | shadedclient | 1148 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | trunk passed |
   | 0 | spotbugs | 48 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | -1 | findbugs | 40 | hadoop-yarn-common in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 28 | hadoop-yarn-common in the patch failed. |
   | +1 | compile | 1087 | the patch passed |
   | +1 | javac | 1087 | the patch passed |
   | -0 | checkstyle | 150 | root: The patch generated 1 new + 123 unchanged - 
1 fixed = 124 total (was 124) |
   | -1 | mvnsite | 39 | hadoop-yarn-common in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 772 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | -1 | findbugs | 40 | hadoop-yarn-common in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 197 | hadoop-auth in the patch passed. |
   | +1 | unit | 64 | hadoop-registry in the patch passed. |
   | -1 | unit | 137 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | unit | 42 | hadoop-yarn-common in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 7342 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServerNoXAttrs |
   |   | hadoop.fs.http.server.TestHttpFSServer |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   |   | hadoop.lib.service.hadoop.TestFileSystemAccessService |
   |   | hadoop.test.TestHFSTestCase |
   |   | hadoop.fs.http.server.TestHttpFSServerNoACLs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1428 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2aea207d8df1 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44850f6 |
   | Default Java | 1.8.0_222 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1428/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   |  Test Results | 

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1404: HDFS-13660 Copy file till the source file length during distcp

2019-09-12 Thread GitBox
mukund-thakur commented on a change in pull request #1404: HDFS-13660 Copy file 
till the source file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#discussion_r323663047
 
 

 ##
 File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
 ##
 @@ -444,6 +449,54 @@ private void testCopyingExistingFiles(FileSystem fs, 
CopyMapper copyMapper,
 }
   }
 
+  @Test
+  public void testCopyWhileAppend() throws Exception {
+deleteState();
+mkdirs(SOURCE_PATH + "/1");
+touchFile(SOURCE_PATH + "/1/3");
+CopyMapper copyMapper = new CopyMapper();
+StubContext stubContext = new StubContext(getConfiguration(), null, 0);
+Mapper.Context context =
+stubContext.getContext();
+copyMapper.setup(context);
+final Path path = new Path(SOURCE_PATH + "/1/3");
+int manyBytes = 1;
+appendFile(path, manyBytes);
+ScheduledExecutorService scheduledExecutorService = 
Executors.newSingleThreadScheduledExecutor();
+Runnable task = new Runnable() {
+  public void run() {
+try {
+  int maxAppendAttempts = 20;
+  int appendCount = 0;
+  while (appendCount < maxAppendAttempts) {
+appendFile(path, 1000);
+Thread.sleep(200);
+appendCount++;
+  }
+} catch (IOException | InterruptedException e) {
+  e.printStackTrace();
+}
+  }
+};
+scheduledExecutorService.schedule(task, 10, TimeUnit.MILLISECONDS);
+boolean isFileMismatchErrorPresent = false;
+try {
+  copyMapper.map(new Text(DistCpUtils.getRelativePath(new 
Path(SOURCE_PATH), path)),
+  new CopyListingFileStatus(cluster.getFileSystem().getFileStatus(
+  path)), context);
+} catch (Exception ex) {
+  StringWriter sw = new StringWriter();
+  ex.printStackTrace(new PrintWriter(sw));
 
 Review comment:
ex.printStackTrace(new PrintWriter(sw)); is used to capture the complete 
stack trace in a string, because the message we want to match is part of a 
nested stack trace.
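The idiom discussed above, rendering an exception's full (nested) stack trace into a String so any part of the cause chain can be matched with a substring search, can be sketched as follows. The class and method names here are illustrative, not taken from the patch:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Standalone sketch of the StringWriter/PrintWriter stack-trace capture idiom.
public class StackTraceToString {

    /** Render a throwable's stack trace, including nested causes, as a String. */
    static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        // printStackTrace(PrintWriter) walks the whole cause chain,
        // so text from any nested exception ends up in the buffer.
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        Exception nested = new IllegalStateException("file length mismatch");
        Exception wrapper = new RuntimeException("copy failed", nested);
        String trace = stackTraceOf(wrapper);
        // prints true: the nested cause's message is in the captured trace
        System.out.println(trace.contains("file length mismatch"));
    }
}
```

This is why matching on ex.getMessage() alone is not enough in the test: the interesting message may live on a cause several levels down, while the captured trace contains all of them.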





[GitHub] [hadoop] mukund-thakur commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp

2019-09-12 Thread GitBox
mukund-thakur commented on issue #1404: HDFS-13660 Copy file till the source 
file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530764029
 
 
   > yes. It's a change in behaviour, but probably a good one. A test will 
verify that things fail the way we expect -and will continue to do so.
   > I tried writing the truncate test case, but truncate calls always fail 
while the copy is happening, with the error: (Failed to TRUNCATE_FILE 
/tmp/source/1/3 for DFSClient_NONMAPREDUCE_-2094558173_166 on 127.0.0.1 because 
DFSClient_NONMAPREDUCE_-2094558173_166 is already the current lease holder.)
   
   > One little concern though: where does that filestatus come from? It's 
created in the mapper just before the operation, right? As if it's created when 
the list of files to copy is created (which I doubt...) then the current code 
handles the case where the file is changed between job schedule and task 
execution -that would be a visible regression, which we would have to worry 
about.
   > The filestatus is created in the mapper. So I think we are good here.





[GitHub] [hadoop] hadoop-yetus commented on issue #1404: HDFS-13660 Copy file till the source file length during distcp

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1404: HDFS-13660 Copy file till the source 
file length during distcp
URL: https://github.com/apache/hadoop/pull/1404#issuecomment-530767466
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 113 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1368 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 32 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 893 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 51 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 49 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-tools/hadoop-distcp: The patch generated 10 
new + 329 unchanged - 1 fixed = 339 total (was 330) |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 921 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | the patch passed |
   | +1 | findbugs | 53 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 982 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4779 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1404 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3e16e24c3409 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44850f6 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/2/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1404/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] smengcl edited a comment on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-12 Thread GitBox
smengcl edited a comment on issue #1360: HDDS-2007. Make ozone fs shell command 
work with OM HA service ids 
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530794042
 
 
   As for the unit test failures for the latest commit, I tested that:
   ```
   org.apache.hadoop.fs.ozone.TestOzoneFileSystem: passed locally
   org.apache.hadoop.ozone.TestSecureOzoneCluster: passed locally
   org.apache.hadoop.ozone.client.rpc.TestContainerStateMachineFailures: 
testDelegationToken and testDelegationTokenRenewal failed but unrelated; both 
are failing before this jira.
   org.apache.hadoop.ozone.om.TestSecureOzoneManager: unrelated.
   ```





[jira] [Created] (HADOOP-16559) [HDFS] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16559:
--

 Summary: [HDFS] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16559
 URL: https://issues.apache.org/jira/browse/HADOOP-16559
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Vinayakumar B


Use "protoc-maven-plugin" to dynamically download protobuf executable to 
generate protobuf classes from proto file
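For illustration, one common plugin of this kind is org.xolstice:protobuf-maven-plugin, whose configuration looks roughly like the sketch below. The plugin choice, version numbers, and coordinates are assumptions for the sketch, not necessarily what Hadoop ultimately adopted:

```xml
<!-- Illustrative sketch only: a "protoc-maven-plugin"-style setup that
     downloads protoc at build time instead of requiring a local install.
     Coordinates and versions are assumptions, not Hadoop's final config. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.6.1</version>
  <configuration>
    <!-- Fetches the protoc binary matching the build platform. -->
    <protocArtifact>com.google.protobuf:protoc:3.7.1:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Note that ${os.detected.classifier} comes from the kr.motd.maven:os-maven-plugin build extension, which would also need to be declared in the pom for this sketch to resolve.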






[GitHub] [hadoop] timmylicheng commented on a change in pull request #1418: HDDS-2089: Add createPipeline CLI.

2019-09-12 Thread GitBox
timmylicheng commented on a change in pull request #1418: HDDS-2089: Add 
createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418#discussion_r323611731
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
 ##
 @@ -390,10 +390,9 @@ public void 
notifyObjectStageChange(StorageContainerLocationProtocolProtos
   public Pipeline createReplicationPipeline(HddsProtos.ReplicationType type,
   HddsProtos.ReplicationFactor factor, HddsProtos.NodePool nodePool)
   throws IOException {
-// TODO: will be addressed in future patch.
-// This is needed only for debugging purposes to make sure cluster is
-// working correctly.
-return null;
+AUDIT.logReadSuccess(
 
 Review comment:
   Updated.





[GitHub] [hadoop] brahmareddybattula commented on issue #1429: HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread GitBox
brahmareddybattula commented on issue #1429: HADOOP-16562. [pb-upgrade] Update 
docker image to have 3.7.1 protoc executable
URL: https://github.com/apache/hadoop/pull/1429#issuecomment-530769758
 
 
   +1





[jira] [Commented] (HADOOP-16423) S3Guarld fsck: Check metadata consistency from S3 to metadatastore (log)

2019-09-12 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928449#comment-16928449
 ] 

Gabor Bota commented on HADOOP-16423:
-

+1 from [~ste...@apache.org] on PR #1208. Committing.
Created followup jiras:
HADOOP-16564 - docs
HADOOP-16563 - authoritative paths


> S3Guarld fsck: Check metadata consistency from S3 to metadatastore (log)
> 
>
> Key: HADOOP-16423
> URL: https://issues.apache.org/jira/browse/HADOOP-16423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> This part is only for logging the inconsistencies.
> This issue only covers the part when the walk is being done in the S3 and 
> compares all metadata to the MS.
> There will be no part where the walk is being done in the MS and compare it 
> to the S3. 






[GitHub] [hadoop] hadoop-yetus commented on issue #1418: HDDS-2089: Add createPipeline CLI.

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1418: HDDS-2089: Add createPipeline CLI.
URL: https://github.com/apache/hadoop/pull/1418#issuecomment-530781902
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 584 | HDDS-1564 passed |
   | +1 | compile | 378 | HDDS-1564 passed |
   | +1 | checkstyle | 81 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | HDDS-1564 passed |
   | 0 | spotbugs | 416 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 608 | HDDS-1564 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | compile | 388 | the patch passed |
   | +1 | javac | 388 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | the patch passed |
   | +1 | findbugs | 658 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 296 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1903 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7715 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1418 |
   | JIRA Issue | HDDS-2089 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f80e9d4d566c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 753fc67 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/2/testReport/ |
   | Max. process+thread count | 4872 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm hadoop-hdds/tools 
U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1418/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16557) [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16557:
---
Assignee: Vinayakumar B
  Status: Patch Available  (was: Open)

> [pb-upgrade] Upgrade protobuf.version to 3.7.1
> --
>
> Key: HADOOP-16557
> URL: https://issues.apache.org/jira/browse/HADOOP-16557
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> Bump up the "protobuf.version" to 3.7.1 and ensure all compile is successful.
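> For illustration, the change amounts to bumping the shared version property
> in the build, roughly like this (a sketch; the exact pom location is an
> assumption, only the property name comes from the issue title):
> {code:xml}
> <!-- sketch: shared protobuf version property in the parent pom -->
> <properties>
>   <protobuf.version>3.7.1</protobuf.version>
> </properties>
> {code}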






[GitHub] [hadoop] vinayakumarb opened a new pull request #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread GitBox
vinayakumarb opened a new pull request #1432: HADOOP-16557. [pb-upgrade] 
Upgrade protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432
 
 
   





[GitHub] [hadoop] bshashikant opened a new pull request #1430: HDDS-2117. ContainerStateMachine#writeStateMachineData times out.

2019-09-12 Thread GitBox
bshashikant opened a new pull request #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430
 
 
   
   





[jira] [Created] (HADOOP-16564) S3Guard fsck: Add docs to the first iteration (S3->ddbMS, -verify)

2019-09-12 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-16564:
---

 Summary: S3Guard fsck: Add docs to the first iteration 
(S3->ddbMS, -verify)
 Key: HADOOP-16564
 URL: https://issues.apache.org/jira/browse/HADOOP-16564
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota


Followup for HADOOP-16423.
Add markdown documentation and describe how to extend it with new violations.






[GitHub] [hadoop] bgaborg merged pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-12 Thread GitBox
bgaborg merged pull request #1208: HADOOP-16423. S3Guard fsck: Check metadata 
consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208
 
 
   





[jira] [Commented] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST

2019-09-12 Thread luhuachao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928386#comment-16928386
 ] 

luhuachao commented on HADOOP-16069:


[~tasanuma] thanks for the reply.

One line of code substitutes '_HOST' with the real hostname:
{code:java}
principal = SecurityUtil.getServerPrincipal(principal, "");
{code}
and a UT already exists for this method.

I am not sure whether it is still necessary to add a UT for this change?
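As a hedged illustration of what that one line achieves, here is a local re-implementation of the '_HOST' substitution. This is a sketch for clarity, not the actual {{SecurityUtil.getServerPrincipal}} code (which additionally resolves the local hostname when an empty hostname is passed):

```java
// Hedged sketch: re-implements the '_HOST' substitution locally to illustrate
// the behaviour; this is NOT the actual Hadoop SecurityUtil implementation.
public class PrincipalSubstitution {
    static final String HOSTNAME_PATTERN = "_HOST";

    // Replace the _HOST token of a Kerberos principal (user/_HOST@REALM)
    // with the supplied hostname, lower-cased as Kerberos convention expects.
    static String resolvePrincipal(String principalConfig, String hostname) {
        String[] parts = principalConfig.split("[/@]");
        if (parts.length != 3 || !HOSTNAME_PATTERN.equals(parts[1])) {
            return principalConfig; // nothing to substitute
        }
        return parts[0] + "/" + hostname.toLowerCase() + "@" + parts[2];
    }

    public static void main(String[] args) {
        // prints nn/node1.example.com@EXAMPLE.COM
        System.out.println(resolvePrincipal("nn/_HOST@EXAMPLE.COM", "node1.example.com"));
    }
}
```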

> Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in 
> ZKDelegationTokenSecretManager using principal with Schema /_HOST
> 
>
> Key: HADOOP-16069
> URL: https://issues.apache.org/jira/browse/HADOOP-16069
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Minor
>  Labels: RBF, kerberos
> Attachments: HADOOP-16069.001.patch
>
>
> When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure 
> ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we 
> have to use a principal like 'nn/hostn...@example.com'.






[GitHub] [hadoop] hadoop-yetus commented on issue #1429: HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1429: HADOOP-16562. [pb-upgrade] Update docker 
image to have 3.7.1 protoc executable
URL: https://github.com/apache/hadoop/pull/1429#issuecomment-530746447
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | shadedclient | 862 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | hadolint | 3 | There were no new hadolint issues. |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 852 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 1942 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1429/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1429 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
   | uname | Linux 70fb5cb64b52 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44850f6 |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1429/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16423) S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)

2019-09-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928447#comment-16928447
 ] 

Hudson commented on HADOOP-16423:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17283 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17283/])
HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and (github: 
rev 4e273a31f66013b7c20e8114451f5bc6c741f2cc)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java


> S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)
> 
>
> Key: HADOOP-16423
> URL: https://issues.apache.org/jira/browse/HADOOP-16423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> This part only logs the inconsistencies.
> This issue only covers the case where the walk is done over S3 and all 
> metadata is compared to the MS.
> Walking the MS and comparing it to S3 is out of scope here.






[jira] [Created] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-12 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-16565:
---

 Summary: Fix "com.amazonaws.SdkClientException: Unable to find a 
region via the region provider chain."
 Key: HADOOP-16565
 URL: https://issues.apache.org/jira/browse/HADOOP-16565
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota
Assignee: Gabor Bota


The error found during testing in the following tests:
{noformat}
[ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
Unable to f...
[ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find a 
region v...
[ERROR]   
ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
 ? SdkClient
[ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
Unable to ...
[ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
SdkClient Unabl...
[ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
Unable to ...
[ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
SdkClient ...
[ERROR]   
ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
 ? SdkClient
[ERROR]   
ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
 ? SdkClient
[ERROR]   
ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
 ? SdkClient
[ERROR]   
ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
 ? SdkClient
[ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 ? 
SdkClient
[ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
SdkClient Unab...
[ERROR]   
ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
 ? SdkClient
[ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
SdkClient Una...
[ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
Unable to find...
[ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
Unable to find...
{noformat}
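This SdkClientException usually means the AWS SDK's default region provider chain (environment variables, shared config file, EC2 metadata) returned nothing. One possible workaround for the STS-based tests, sketched here with placeholder values and property names taken from the hadoop-aws assumed-role documentation, is to pin the STS endpoint and region in the test configuration:
{code:xml}
<!-- sketch: pin an STS endpoint and region so the SDK need not guess one -->
<property>
  <name>fs.s3a.assumed.role.sts.endpoint</name>
  <value>sts.eu-west-1.amazonaws.com</value>
</property>
<property>
  <name>fs.s3a.assumed.role.sts.endpoint.region</name>
  <value>eu-west-1</value>
</property>
{code}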






[jira] [Created] (HADOOP-16558) [COMMON] use protobuf-maven-plugin to generate protobuf classes

2019-09-12 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16558:
--

 Summary: [COMMON] use protobuf-maven-plugin to generate protobuf 
classes
 Key: HADOOP-16558
 URL: https://issues.apache.org/jira/browse/HADOOP-16558
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Reporter: Vinayakumar B


Use the "protoc-maven-plugin" to dynamically download the protobuf executable 
and generate protobuf classes from the proto files.
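
A minimal sketch of such a plugin configuration (the plugin coordinates and 
layout here are assumptions for illustration, not taken from the patch; 
{{os.detected.classifier}} requires the os-maven-plugin extension):
{code:xml}
<!-- sketch: download protoc at build time and generate sources from src/main/proto -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals><goal>compile</goal></goals>
    </execution>
  </executions>
</plugin>
{code}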






[GitHub] [hadoop] vinayakumarb opened a new pull request #1429: HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread GitBox
vinayakumarb opened a new pull request #1429: HADOOP-16562. [pb-upgrade] Update 
docker image to have 3.7.1 protoc executable
URL: https://github.com/apache/hadoop/pull/1429
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1430: HDDS-2117. ContainerStateMachine#writeStateMachineData times out.

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1430: HDDS-2117. 
ContainerStateMachine#writeStateMachineData times out.
URL: https://github.com/apache/hadoop/pull/1430#issuecomment-530754957
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 844 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 161 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 24 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 3 new + 208 
unchanged - 0 fixed = 211 total (was 208) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 26 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 253 | hadoop-hdds in the patch passed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2960 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1430 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9485fb67655c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44850f6 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/testReport/ |
   | Max. process+thread count | 461 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1430/1/console |
   | versions | 

[GitHub] [hadoop] bgaborg commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log)

2019-09-12 Thread GitBox
bgaborg commented on issue #1208: HADOOP-16423. S3Guard fsck: Check metadata 
consistency between S3 and metadatastore (log)
URL: https://github.com/apache/hadoop/pull/1208#issuecomment-530776300
 
 
   Created followup: https://issues.apache.org/jira/browse/HADOOP-16563
   Committing.





[GitHub] [hadoop] hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard prune sets up the FS

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1402: HADOOP-16547. make sure that s3guard 
prune sets up the FS
URL: https://github.com/apache/hadoop/pull/1402#issuecomment-530779279
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 155 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1532 | trunk passed |
   | +1 | compile | 41 | trunk passed |
   | +1 | checkstyle | 38 | trunk passed |
   | +1 | mvnsite | 49 | trunk passed |
   | +1 | shadedclient | 1002 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 35 | trunk passed |
   | 0 | spotbugs | 76 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 70 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 41 | the patch passed |
   | +1 | compile | 35 | the patch passed |
   | +1 | javac | 35 | the patch passed |
   | +1 | checkstyle | 23 | the patch passed |
   | +1 | mvnsite | 40 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 993 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | the patch passed |
   | +1 | findbugs | 74 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 88 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4382 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 599076409e47 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44850f6 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/3/testReport/ |
   | Max. process+thread count | 323 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1402/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] vinayakumarb merged pull request #1429: HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread GitBox
vinayakumarb merged pull request #1429: HADOOP-16562. [pb-upgrade] Update 
docker image to have 3.7.1 protoc executable
URL: https://github.com/apache/hadoop/pull/1429
 
 
   





[jira] [Updated] (HADOOP-16557) [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16557:
---
Summary: [pb-upgrade] Upgrade protobuf.version to 3.7.1  (was: Upgrade 
protobuf.version to 3.7.1 )

> [pb-upgrade] Upgrade protobuf.version to 3.7.1
> --
>
> Key: HADOOP-16557
> URL: https://issues.apache.org/jira/browse/HADOOP-16557
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Priority: Major
>
> Bump up the "protobuf.version" to 3.7.1 and ensure all compile is successful.






[GitHub] [hadoop] hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade 
protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#issuecomment-530788673
 
 
   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/console in case 
of problems.
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple 
pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-530799657
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 595 | HDDS-1564 passed |
   | +1 | compile | 375 | HDDS-1564 passed |
   | +1 | checkstyle | 76 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 852 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | HDDS-1564 passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 613 | HDDS-1564 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | compile | 390 | the patch passed |
   | +1 | javac | 390 | the patch passed |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 674 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | the patch passed |
   | +1 | findbugs | 630 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 302 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1894 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7797 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.safemode.TestSCMSafeModeManager |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestroy |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 770a39ed0235 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 753fc67 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/1/testReport/ |
   | Max. process+thread count | 5406 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   

[GitHub] [hadoop] hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command 
work with OM HA service ids   
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530821490
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 25 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 165 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 28 | hadoop-ozone in trunk failed. |
   | -0 | patch | 199 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 30 | hadoop-ozone in the patch failed. |
   | -1 | javac | 30 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 41 new + 752 
unchanged - 17 fixed = 793 total (was 769) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 248 | hadoop-hdds in the patch passed. |
   | -1 | unit | 33 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 3130 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1360 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 8e964b1439e5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f4f9f0f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Updated] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16566:

Affects Version/s: 3.3.0

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the hadoop util's instead.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-16566:
---

 Summary: S3Guard fsck: Use org.apache.hadoop.util.StopWatch 
instead of com.google.common.base.Stopwatch
 Key: HADOOP-16566
 URL: https://issues.apache.org/jira/browse/HADOOP-16566
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota
Assignee: Gabor Bota


Some distributions won't have the updated guava, and 
{{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
Fix this issue by using the hadoop util's instead.
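The swap is mechanical; as a hedged illustration (a minimal stand-in with a start()/stop()/now(TimeUnit) shape like the hadoop util class, not the actual org.apache.hadoop.util.StopWatch source), the replacement pattern looks like:

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of a StopWatch replacing the guava Stopwatch usage;
// class and method names here are illustrative only.
public class StopWatchDemo {

    static final class StopWatch {
        private boolean running;
        private long startNanos;
        private long elapsedNanos;

        StopWatch start() {
            running = true;
            startNanos = System.nanoTime();
            return this;
        }

        StopWatch stop() {
            if (running) {
                elapsedNanos += System.nanoTime() - startNanos;
                running = false;
            }
            return this;
        }

        // Elapsed time, including any still-running interval.
        long now(TimeUnit unit) {
            long n = elapsedNanos + (running ? System.nanoTime() - startNanos : 0);
            return unit.convert(n, TimeUnit.NANOSECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StopWatch sw = new StopWatch().start();
        Thread.sleep(50);
        sw.stop();
        // Sleep guarantees roughly the requested duration; allow slack.
        System.out.println(sw.now(TimeUnit.MILLISECONDS) >= 45);
    }
}
```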






[jira] [Updated] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16566:

Component/s: fs/s3

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the hadoop util's instead.






[GitHub] [hadoop] hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1360: HDDS-2007. Make ozone fs shell command 
work with OM HA service ids   
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530840432
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 10 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
   | -1 | compile | 24 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 15 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 17 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 184 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   | -0 | patch | 212 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 42 new + 752 
unchanged - 17 fixed = 794 total (was 769) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 16 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 25 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 285 | hadoop-hdds in the patch passed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3301 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1360 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux bc3ae6627f85 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f4f9f0f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1360/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 

[jira] [Updated] (HADOOP-16562) [pb-upgrade] Update docker image to have 3.7.1 protoc executable

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-16562:
---
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> [pb-upgrade] Update docker image to have 3.7.1 protoc executable
> 
>
> Key: HADOOP-16562
> URL: https://issues.apache.org/jira/browse/HADOOP-16562
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Current docker image is installed with 2.5.0 protobuf executable.
> During the process of upgrading protobuf to 3.7.1, docker needs to have both 
> versions for yetus to verify.






[GitHub] [hadoop] smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work with OM HA service ids

2019-09-12 Thread GitBox
smengcl commented on issue #1360: HDDS-2007. Make ozone fs shell command work 
with OM HA service ids
URL: https://github.com/apache/hadoop/pull/1360#issuecomment-530800476
 
 
   @arp7 @bharatviswa504 Some acceptance tests are failing because of the 
initialization of an `OzoneConfiguration` object in `BasicOzoneFileSystem` 
introduced in commit 
https://github.com/apache/hadoop/pull/1360/commits/b31fd0b20a436fbc6f4028c49d8e0fd04ead53f3.
   
   The 
[log](https://elek.github.io/ozone-ci/pr/pr-hdds-2007-4rjxl/acceptance/summary.html#s1-s1-t1-k2-k2)
 shows:
   ```
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/hadoop/hdds/conf/OzoneConfiguration
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2204)
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2169)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2265)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2652)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2665)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:188)
at 
org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:237)
at org.apache.hadoop.fs.shell.Command.run(Command.java:164)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
   Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.hdds.conf.OzoneConfiguration
   ...
   ```
   
   Looks like in the failed smoke tests `BasicOzoneFileSystem` can't load the 
jar that has `OzoneConfiguration`. Any suggestions?
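   The failure mode can be reproduced in isolation (a hedged sketch only — this 
is not Hadoop's actual loading code; `FileSystem.getFileSystemClass` resolves 
the scheme's implementation class by name, and `Class.forName` fails when the 
jar carrying a dependency such as `OzoneConfiguration` is missing from the 
classpath):

```java
// Sketch of the smoke-test failure mode: class resolution by name succeeds
// only if the class's jar is on the classpath of the invoking process.
public class ClassLoadDemo {

    static boolean isLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A JDK class is always loadable...
        System.out.println(isLoadable("java.util.HashMap"));
        // ...while the Ozone class is only loadable if its jar is shipped
        // with the client, which is what the stack trace above points at.
        System.out.println(isLoadable("org.apache.hadoop.hdds.conf.OzoneConfiguration"));
    }
}
```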


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade 
protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#issuecomment-530813915
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1356 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 90 | Maven dependency ordering for branch |
   | -1 | mvninstall | 51 | root in trunk failed. |
   | -1 | compile | 34 | root in trunk failed. |
   | +1 | checkstyle | 143 | trunk passed |
   | -1 | mvnsite | 28 | root in trunk failed. |
   | -1 | shadedclient | 47 | branch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 30 | root in trunk failed. |
   | 0 | spotbugs | 16 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 17 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   | -1 | findbugs | 14 | hadoop-hdfs in trunk failed. |
   | -1 | findbugs | 14 | hadoop-hdfs-rbf in trunk failed. |
   | -1 | findbugs | 14 | hadoop-yarn-common in trunk failed. |
   | 0 | findbugs | 16 | branch/hadoop-client-modules/hadoop-client-runtime no 
findbugs output file (findbugsXml.xml) |
   | -1 | findbugs | 70 | root in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | -1 | mvninstall | 14 | hadoop-hdfs in the patch failed. |
   | -1 | mvninstall | 14 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | mvninstall | 14 | hadoop-yarn-common in the patch failed. |
   | -1 | mvninstall | 16 | hadoop-fs2img in the patch failed. |
   | -1 | mvninstall | 39 | root in the patch failed. |
   | -1 | compile | 35 | root in the patch failed. |
   | -1 | javac | 35 | root in the patch failed. |
   | -0 | checkstyle | 137 | root: The patch generated 2 new + 35 unchanged - 1 
fixed = 37 total (was 36) |
   | +1 | hadolint | 3 | There were no new hadolint issues. |
   | -1 | mvnsite | 27 | root in the patch failed. |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 913 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 35 | root in the patch failed. |
   | 0 | findbugs | 19 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 17 | hadoop-hdfs in the patch failed. |
   | -1 | findbugs | 18 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | findbugs | 19 | hadoop-yarn-common in the patch failed. |
   | -1 | findbugs | 21 | hadoop-fs2img in the patch failed. |
   | 0 | findbugs | 19 | hadoop-client-modules/hadoop-client-runtime has no 
data from findbugs |
   | -1 | findbugs | 79 | root in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 542 | root in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 4426 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1432 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs compile 
javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 54606e930db4 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f4f9f0f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/branch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/branch-mvnsite-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/branch-javadoc-root.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 

[GitHub] [hadoop] bgaborg opened a new pull request #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
bgaborg opened a new pull request #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433
 
 
   Change-Id: Ied43ef1522dfc6a1210d6fc58c38d8208824931b
   
   





[jira] [Created] (HADOOP-16567) S3A Secret access to fall back to XML if credential provider raises IOE.

2019-09-12 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16567:
---

 Summary: S3A Secret access to fall back to XML if credential 
provider raises IOE.
 Key: HADOOP-16567
 URL: https://issues.apache.org/jira/browse/HADOOP-16567
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.2
Reporter: Steve Loughran


This is hive related. Hive can put secrets into a JCEKS file which only hive 
may read.

S3A file systems created on behalf of a user do not have access to this file. 
Yet it is listed as the credential provider in the hadoop.credential.providers 
option in core-site -and that is marked as final. When the S3A {{initialize()}} 
method looks up passwords and encryption keys, the failure to open the file 
raises an IOE -and the FS cannot be instantiated.

Proposed: {{S3AUtils.lookupPassword()}} to catch such exceptions, and fall back 
to using {{Configuration.get()}} and so retrieve any property in the XML. If 
there is one failing here, it is if the user did want to read from a credential 
provider, the failure to read the credential will be lost, and the filesystem 
will simply get the default value.

There is a side issue: permission exceptions can surface as file-not-found 
exceptions, which are then wrapped as generic IOEs in Configuration. It 
will be hard and brittle to attempt to only respond to permission restrictions. 
We could look at improving {{Configuration.getPassword()}} but that class is so 
widely used, I am not in a rush to break things.

I think this means we have to add another option. Trying to be clever about 
when to fall back versus when to rethrow the exception is doomed.

If this works for S3A, we will need to consider replicating it for ABFS. 






[GitHub] [hadoop] steveloughran commented on issue #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
steveloughran commented on issue #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433#issuecomment-530888108
 
 
   tested?
   





[GitHub] [hadoop] bgaborg commented on issue #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
bgaborg commented on issue #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433#issuecomment-530905783
 
 
   tested against ireland. No new errors, but I'm still getting 
`com.amazonaws.SdkClientException: Unable to find a region via the region 
provider chain. Must provide an explicit region in the builder or setup 
environment to supply a region.` 
   for which I created a jira; I'll fix it tomorrow.





[GitHub] [hadoop] hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade 
protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#issuecomment-530930833
 
 
   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/4/console in case 
of problems.
   





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928784#comment-16928784
 ] 

stack commented on HADOOP-13363:


On your #1 and #2 choices above, #1 works for us. Cost is negligible (caveat 
initial setup). The separate repo is forgotten till it comes time to spin up a 
new release. On #2, the submodule would be hard to 'explain' being in-line w/ 
hadoop checkout and there is too much stuff in hadoop repo as it is.

Would suggest you broaden the scope of #1 so as to include other finicky 
dependencies beyond protobuf that might benefit being hidden from 
downstreamers. Could be done in another issue but suggest be careful you don't 
fence off the possibility (perhaps hadoop-thirdparty rather than 
hadoop-shaded-thirdparty as repo name?).

bq. Release process: can it be issued by the ASF?

Why not? Would suggest it be an artifact treated as any other shipped by this PMC. 
You'd generate an RC and vote on it (This is how hbase PMC does it).

bq. There are many javac warnings due to new protobuf-3.6.1 dependency due to 
deprecated APIs usage.

Isn't there a flag to turn these off (IIRC).

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928679#comment-16928679
 ] 

Steve Loughran commented on HADOOP-16566:
-

is there any way for us to tweak checkstyle/findbugs to view use of google 
stopwatch as meriting a warning?
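
One option would be Checkstyle's stock IllegalImport check (a sketch only, 
assuming a recent Checkstyle — the illegalClasses property needs 7.8+ — and 
whether it fits Hadoop's existing checkstyle.xml is an open question):

```xml
<module name="TreeWalker">
  <module name="IllegalImport">
    <!-- flag any import of the guava Stopwatch -->
    <property name="illegalClasses"
              value="com.google.common.base.Stopwatch"/>
  </module>
</module>
```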

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the Hadoop util class instead.






[jira] [Work started] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16566 started by Gabor Bota.
---
> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the Hadoop util class instead.






[GitHub] [hadoop] hadoop-yetus commented on issue #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433#issuecomment-530899366
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1135 | trunk passed |
   | +1 | compile | 39 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 43 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 22 | the patch passed |
   | +1 | mvnsite | 35 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 827 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | the patch passed |
   | +1 | findbugs | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3456 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1433/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1433 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6ba04eb827a9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2ff2a7f |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1433/1/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1433/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on issue #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
steveloughran commented on issue #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433#issuecomment-530916390
 
 
   +1, with the note that DurationInfo exists to do all this timing and 
printing for you, and it is what you should be using in future code.
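DurationInfo is used as a try-with-resources block that reports the elapsed time of the enclosed operation when it closes. A minimal stdlib-only sketch of the same pattern — the `TimedScope` class below is illustrative, not Hadoop's actual DurationInfo implementation:

```java
// Illustrative stand-in for the DurationInfo pattern: time a scope and
// report the duration when the scope closes. Not Hadoop's actual class.
public class TimedScope implements AutoCloseable {
    private final String name;
    private final long startNanos = System.nanoTime();

    public TimedScope(String name) {
        this.name = name;
    }

    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }

    @Override
    public void close() {
        // DurationInfo logs the duration here; this sketch just prints it.
        System.out.println(name + ": " + elapsedMillis() + " ms");
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed;
        try (TimedScope scope = new TimedScope("sleep(50)")) {
            Thread.sleep(50);
            elapsed = scope.elapsedMillis();
        }
        if (elapsed < 50) {
            throw new AssertionError("expected at least 50 ms, got " + elapsed);
        }
    }
}
```

The point of the pattern is that the timing and the reporting can never be forgotten on an early return or exception, since close() always runs.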





[GitHub] [hadoop] bgaborg merged pull request #1433: HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread GitBox
bgaborg merged pull request #1433: HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
URL: https://github.com/apache/hadoop/pull/1433
 
 
   





[jira] [Updated] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13363:
---
Status: Open  (was: Patch Available)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[GitHub] [hadoop] hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1432: HADOOP-16557. [pb-upgrade] Upgrade 
protobuf.version to 3.7.1
URL: https://github.com/apache/hadoop/pull/1432#issuecomment-530952381
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1145 | trunk passed |
   | +1 | compile | 1028 | trunk passed |
   | +1 | checkstyle | 145 | trunk passed |
   | +1 | mvnsite | 229 | trunk passed |
   | +1 | shadedclient | 1181 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 243 | trunk passed |
   | 0 | spotbugs | 22 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 21 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   | 0 | findbugs | 22 | branch/hadoop-client-modules/hadoop-client-runtime no 
findbugs output file (findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 15 | hadoop-hdfs in the patch failed. |
   | -1 | mvninstall | 13 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | mvninstall | 13 | hadoop-yarn-common in the patch failed. |
   | -1 | mvninstall | 17 | hadoop-fs2img in the patch failed. |
   | -1 | compile | 40 | root in the patch failed. |
   | -1 | javac | 40 | root in the patch failed. |
   | -0 | checkstyle | 153 | root: The patch generated 2 new + 35 unchanged - 1 
fixed = 37 total (was 36) |
   | -1 | mvnsite | 15 | hadoop-hdfs in the patch failed. |
   | -1 | mvnsite | 16 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | mvnsite | 16 | hadoop-yarn-common in the patch failed. |
   | -1 | mvnsite | 18 | hadoop-fs2img in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 35 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 12 | hadoop-hdfs in the patch failed. |
   | -1 | javadoc | 13 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | javadoc | 12 | hadoop-yarn-common in the patch failed. |
   | 0 | findbugs | 11 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 12 | hadoop-hdfs in the patch failed. |
   | -1 | findbugs | 12 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | findbugs | 11 | hadoop-yarn-common in the patch failed. |
   | -1 | findbugs | 15 | hadoop-fs2img in the patch failed. |
   | 0 | findbugs | 12 | hadoop-client-modules/hadoop-client-runtime has no 
data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 11 | hadoop-project in the patch passed. |
   | -1 | unit | 12 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 11 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | unit | 12 | hadoop-yarn-common in the patch failed. |
   | -1 | unit | 14 | hadoop-fs2img in the patch failed. |
   | +1 | unit | 12 | hadoop-client-runtime in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 5174 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1432 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux dfe9c3cb1073 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1505d3f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/2/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1432/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-fs2img.txt
 |
   | compile | 

[GitHub] [hadoop] nandakumar131 closed pull request #1410: HDDS-2076. Read fails because the block cannot be located in the container

2019-09-12 Thread GitBox
nandakumar131 closed pull request #1410: HDDS-2076. Read fails because the 
block cannot be located in the container
URL: https://github.com/apache/hadoop/pull/1410
 
 
   





[jira] [Commented] (HADOOP-16568) S3A FullCredentialsTokenBinding fails if local credentials are unset

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928715#comment-16928715
 ] 

Steve Loughran commented on HADOOP-16568:
-

{code}
2019-09-12 17:31:39,678 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(815)) - UGI loginUser:stevel 
(auth:SIMPLE)
2019-09-12 17:31:39,925 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceInit(185)) - Filesystem 
s3a://hwdev-steve-ireland-new is using delegation tokens of kind 
S3ADelegationToken/Full
2019-09-12 17:31:40,092 [main] INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service FullCredentials/001 failed in 
state STARTED
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.fs.s3a.auth.MarshalledCredentials.validate(MarshalledCredentials.java:336)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.loadAWSCredentials(FullCredentialsTokenBinding.java:105)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.serviceStart(FullCredentialsTokenBinding.java:75)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.serviceStart(S3ADelegationTokens.java:198)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3387)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:502)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
2019-09-12 17:31:40,094 [main] INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service S3ADelegationTokens failed in 
state STARTED
org.apache.hadoop.service.ServiceStateException: 
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.serviceStart(S3ADelegationTokens.java:198)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3387)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:502)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
Caused by: org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: 
no credentials 

[jira] [Resolved] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16566.
-
Resolution: Fixed

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the Hadoop util class instead.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928795#comment-16928795
 ] 

stack commented on HADOOP-13363:


One thought: just use the hbase-thirdparty jar? It shades protobuf, netty, gson, 
and a few others. 
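For reference, depending on the relocated protobuf from hbase-thirdparty would look roughly like this in a POM. The version element is deliberately a placeholder, and code would then import from the org.apache.hbase.thirdparty.com.google.protobuf package rather than com.google.protobuf:

```xml
<!-- Sketch: pull in the shaded, relocated protobuf from hbase-thirdparty.
     The version element is a placeholder, not a recommendation. -->
<dependency>
  <groupId>org.apache.hbase.thirdparty</groupId>
  <artifactId>hbase-shaded-protobuf</artifactId>
  <version><!-- current hbase-thirdparty release --></version>
</dependency>
```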

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[GitHub] [hadoop] elek opened a new pull request #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread GitBox
elek opened a new pull request #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434
 
 
   We have two kinds of Ozone file system jars: current and legacy. The current 
jar is designed to work only with exactly the same Hadoop version that is used 
for compilation (3.2 as of now).
   
   But as of now the Hadoop classes are included in the current jar, which is 
not necessary, as the jar is expected to be used in an environment where the 
same Hadoop classes are already present. They can be excluded.
   
   See: https://issues.apache.org/jira/browse/HDDS-2120
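One way to express that exclusion is via maven-shade-plugin's artifactSet. A sketch — the exact artifact patterns used by HDDS-2120 are not shown here, the ones below are illustrative assumptions:

```xml
<!-- Sketch: keep Hadoop's own artifacts out of the shaded ozonefs jar,
     since the target environment already provides the same classes.
     The listed artifact patterns are illustrative assumptions. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <exclude>org.apache.hadoop:hadoop-common</exclude>
        <exclude>org.apache.hadoop:hadoop-hdfs-client</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
```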





[GitHub] [hadoop] hadoop-yetus commented on issue #1434: HDDS-2120. Remove hadoop classes from ozonefs-current jar

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1434: HDDS-2120. Remove hadoop classes from 
ozonefs-current jar
URL: https://github.com/apache/hadoop/pull/1434#issuecomment-530956782
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-ozone in trunk failed. |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1043 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 16 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 14 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-ozone in the patch failed. |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 13 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 14 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 134 | hadoop-hdds in the patch failed. |
   | -1 | unit | 25 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2391 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1434 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 6db3fe47a36e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1505d3f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozonefs-lib-current U: 
hadoop-ozone/ozonefs-lib-current |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1434/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928693#comment-16928693
 ] 

Gabor Bota commented on HADOOP-16566:
-

Maybe, but it would be better to update Guava everywhere.

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the Hadoop util class instead.






[jira] [Resolved] (HADOOP-16423) S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)

2019-09-12 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16423.
-
Resolution: Fixed

> S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)
> 
>
> Key: HADOOP-16423
> URL: https://issues.apache.org/jira/browse/HADOOP-16423
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> This part is only for logging the inconsistencies.
> This issue only covers the case where the walk is done in S3 and all 
> metadata is compared to the MS.
> It does not cover the case where the walk is done in the MS and compared 
> to S3. 






[jira] [Commented] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928706#comment-16928706
 ] 

Steve Loughran commented on HADOOP-16565:
-

Looks like I deliberately chose London for STS:
{code}
<property>
  <name>sts.london.endpoint</name>
  <value>sts.eu-west-2.amazonaws.com</value>
</property>
<property>
  <name>sts.london.region</name>
  <value>eu-west-2</value>
</property>
<property>
  <name>sts.ireland.endpoint</name>
  <value>sts.eu-west-1.amazonaws.com</value>
</property>
<property>
  <name>fs.s3a.assumed.role.sts.endpoint</name>
  <value>${sts.london.endpoint}</value>
</property>
<property>
  <name>fs.s3a.assumed.role.sts.endpoint.region</name>
  <value>${sts.london.region}</value>
</property>
{code}

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}






[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928722#comment-16928722
 ] 

Steve Loughran commented on HADOOP-16547:
-

The latest patch seems to do the load, but I need to compare with the unpatched 
version to verify this is a fix. Note the trace also relies on HADOOP-16568 for 
the full DT to work; my next test run will use session creds instead.

{code}
bin/hadoop s3guard prune -seconds 0 -tombstone s3a://hwdev-steve-ireland-new/
2019-09-12 17:47:01,207 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:login(260)) - hadoop login
2019-09-12 17:47:01,210 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(193)) - hadoop login commit
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(221)) - using local user:UnixPrincipal: stevel
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(227)) - Using user: "UnixPrincipal: stevel" 
with name stevel
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(241)) - User entry: "stevel"
2019-09-12 17:47:01,217 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(768)) - Reading credentials from 
location /Users/stevel/Projects/Releases/secrets.bin
2019-09-12 17:47:01,269 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(773)) - Loaded 1 tokens from 
/Users/stevel/Projects/Releases/secrets.bin
2019-09-12 17:47:01,269 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(815)) - UGI loginUser:stevel 
(auth:SIMPLE)
2019-09-12 17:47:01,761 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceInit(185)) - Filesystem 
s3a://hwdev-steve-ireland-new is using delegation tokens of kind 
S3ADelegationToken/Full
2019-09-12 17:47:02,018 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:lookupToken(606)) - Looking for token for service 
s3a://hwdev-steve-ireland-new in credentials
2019-09-12 17:47:02,021 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:lookupToken(610)) - Found token of kind 
S3ADelegationToken/Full
2019-09-12 17:47:02,057 [main] INFO  delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:bindToDelegationToken(327)) - Using delegation token 
S3ATokenIdentifier{S3ADelegationToken/Full; uri=s3a://hwdev-steve-ireland-new; 
timestamp=1568301567114; encryption=(no encryption); 
50f24529-7fa3-4099-b776-f00c9a83ad96; Created on HW13176-2.local/192.168.1.139 
at time 2019-09-12T15:19:25.280Z.; source = Hadoop configuration data}; full 
credentials (valid)
2019-09-12 17:47:02,057 [main] INFO  delegation.S3ADelegationTokens 
(DurationInfo.java:(72)) - Starting: Creating Delegation Token
2019-09-12 17:47:02,059 [main] INFO  delegation.S3ADelegationTokens 
(DurationInfo.java:close(87)) - Creating Delegation Token: duration 0:00.002s
2019-09-12 17:47:02,059 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStart(200)) - S3A Delegation support token 
S3ATokenIdentifier{S3ADelegationToken/Full; uri=s3a://hwdev-steve-ireland-new; 
timestamp=1568301567114; encryption=(no encryption); 
50f24529-7fa3-4099-b776-f00c9a83ad96; Created on HW13176-2.local/192.168.1.139 
at time 2019-09-12T15:19:25.280Z.; source = Hadoop configuration data}; full 
credentials (valid) with Token binding S3ADelegationToken/Full
2019-09-12 17:47:03,520 [main] INFO  s3guard.S3GuardTool 
(S3GuardTool.java:initMetadataStore(323)) - Metadata store 
DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new, 
tableArn=arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new} 
is initialized.
2019-09-12 17:47:03,540 [main] INFO  s3guard.DynamoDBMetadataStore 
(DurationInfo.java:(72)) - Starting: Pruning DynamoDB Store
2019-09-12 17:47:03,574 [main] INFO  s3guard.DynamoDBMetadataStore 
(DurationInfo.java:close(87)) - Pruning DynamoDB Store: duration 0:00.034s
2019-09-12 17:47:03,575 [main] INFO  s3guard.DynamoDBMetadataStore 
(DynamoDBMetadataStore.java:innerPrune(1605)) - Finished pruning 0 items in 
batches of 25
2019-09-12 17:47:03,580 [shutdown-hook-0] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(221)) - Stopping delegation tokens

{code}

> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth 

[jira] [Commented] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928734#comment-16928734
 ] 

Gabor Bota commented on HADOOP-16566:
-

+1 by [~ste...@apache.org] on PR #1433.
Committing.

> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the hadoop util's instead.
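
The substitution described above is a one-for-one API swap. The wrapper calls in
the comments below are assumptions based on the Guava and Hadoop javadocs
(verify against your versions); the runnable body uses only the JDK to show the
same elapsed-time measurement both wrappers perform.

```java
import java.util.concurrent.TimeUnit;

public class StopWatchSwapSketch {
    public static void main(String[] args) {
        // Before (Guava):  Stopwatch sw = Stopwatch.createStarted();
        //                  long ms = sw.elapsed(TimeUnit.MILLISECONDS);
        // After (Hadoop):  StopWatch sw = new StopWatch().start();
        //                  long ms = sw.now(TimeUnit.MILLISECONDS);
        // Plain-JDK equivalent of what both wrappers measure:
        long start = System.nanoTime();
        long ms = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // nanoTime is monotonic, so the elapsed value is never negative
        System.out.println("elapsed-ms:" + (ms >= 0 ? "ok" : "negative"));
    }
}
```

The swap avoids depending on Guava's Stopwatch API, which changed signatures across Guava versions shipped by different distributions.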






[jira] [Commented] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch

2019-09-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928735#comment-16928735
 ] 

Hudson commented on HADOOP-16566:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17288 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17288/])
HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead 
(github: rev 1505d3f5ff725f5a2dcd775b52e7f962e6f3308e)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsck.java


> S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of 
> com.google.common.base.Stopwatch
> --
>
> Key: HADOOP-16566
> URL: https://issues.apache.org/jira/browse/HADOOP-16566
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Some distributions won't have the updated guava, and 
> {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. 
> Fix this issue by using the hadoop util's instead.






[GitHub] [hadoop] hadoop-yetus commented on issue #1160: HADOOP-16458 LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3

2019-09-12 Thread GitBox
hadoop-yetus commented on issue #1160: HADOOP-16458 
LocatedFileStatusFetcher.getFileStatuses failing intermittently with s3
URL: https://github.com/apache/hadoop/pull/1160#issuecomment-530923330
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1019 | trunk passed |
   | +1 | compile | 981 | trunk passed |
   | +1 | checkstyle | 147 | trunk passed |
   | +1 | mvnsite | 190 | trunk passed |
   | +1 | shadedclient | 1074 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 267 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 114 | the patch passed |
   | +1 | compile | 926 | the patch passed |
   | +1 | javac | 926 | the patch passed |
   | -0 | checkstyle | 147 | root: The patch generated 1 new + 229 unchanged - 
9 fixed = 230 total (was 238) |
   | +1 | mvnsite | 184 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | the patch passed |
   | +1 | findbugs | 290 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 541 | hadoop-common in the patch passed. |
   | +1 | unit | 335 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 82 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7420 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1160 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 17bd2e49da74 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2ff2a7f |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/17/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/17/testReport/ |
   | Max. process+thread count | 1370 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1160/17/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928767#comment-16928767
 ] 

Steve Loughran commented on HADOOP-16547:
-

To recreate the problem (note: you also need to set the bucket name or it 
doesn't init):
{code}
~/P/R/fsck bin/hadoop s3guard prune -seconds 0 -tombstone 
s3a://hwdev-steve-ireland-new/
java.lang.IllegalArgumentException: No DynamoDB table name configured
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:497)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1072)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:402)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1767)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1776)
2019-09-12 18:26:53,786 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) 
{code}
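
For reference, the missing configuration behind the "No DynamoDB table name
configured" failure above is the S3Guard DynamoDB table binding. A minimal
sketch, reusing the table and region that appear in the DynamoDBMetadataStore
log line earlier in this thread:

```xml
<!-- S3Guard DynamoDB metastore binding (values from the logs above) -->
<property>
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>hwdev-steve-ireland-new</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <value>eu-west-1</value>
</property>
```

With these set, initMetadataStore can build the store even when driven purely from configuration.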

> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure problem isn't replicated
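
The two-step fix listed above can be sketched as follows. This is a
hypothetical, heavily simplified illustration with stand-in classes; none of
them are the real S3AFileSystem or DynamoDBMetadataStore APIs.

```java
import java.util.Objects;

// Stand-in for the FS's resolved auth chain (delegation tokens, etc.)
class AuthChain {
    final String source;
    AuthChain(String source) { this.source = source; }
}

// Stand-in for S3AFileSystem: binding it resolves the auth chain.
class S3AFileSystemStub {
    private final AuthChain chain;
    S3AFileSystemStub(AuthChain chain) { this.chain = Objects.requireNonNull(chain); }
    AuthChain getAuthChain() { return chain; }
}

// Stand-in for the metastore: the fixed path initializes it *from* the FS,
// so it inherits the FS's credentials instead of raw conf settings.
class MetadataStoreStub {
    String credentialSource;
    void initialize(S3AFileSystemStub fs) {
        this.credentialSource = fs.getAuthChain().source;
    }
}

public class PruneOrderingSketch {
    public static void main(String[] args) {
        // 1. Instantiate the filesystem first (it binds delegation tokens).
        S3AFileSystemStub fs = new S3AFileSystemStub(new AuthChain("delegation-token"));
        // 2. Only then initialize the metadata store from that filesystem.
        MetadataStoreStub store = new MetadataStoreStub();
        store.initialize(fs);
        System.out.println(store.credentialSource);
    }
}
```

The point is purely the ordering: the metastore never sees the custom auth sequence unless the FS exists before initMetadataStore runs.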






[jira] [Updated] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16565:

Component/s: fs/s3

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}






[jira] [Commented] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928709#comment-16928709
 ] 

Steve Loughran commented on HADOOP-16565:
-

Once you have identified the cause, add a new entry in the troubleshooting file 
for others

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}






[jira] [Updated] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16565:

Affects Version/s: 3.3.0

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}






[jira] [Created] (HADOOP-16568) S3A FullCredentialsTokenBinding fails if local credentials are unset

2019-09-12 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16568:
---

 Summary: S3A FullCredentialsTokenBinding fails if local 
credentials are unset
 Key: HADOOP-16568
 URL: https://issues.apache.org/jira/browse/HADOOP-16568
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Not sure how this slipped by the automated tests, but it is happening on my CLI.

# FullCredentialsTokenBinding fails on startup if there are no AWS keys in the 
auth chain
# because it tries to load them in serviceStart, not deployUnbonded
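
The fix direction implied above can be sketched like this: load credentials on
the deployUnbonded path (taken only when no incoming token exists) instead of
unconditionally in serviceStart. These are stand-in classes, not the real
Hadoop delegation-token classes.

```java
public class TokenBindingSketch {
    static class CredentialStore {
        final String keys; // null means "no local AWS keys configured"
        CredentialStore(String keys) { this.keys = keys; }
    }

    static class FullCredentialsBinding {
        private final CredentialStore store;
        FullCredentialsBinding(CredentialStore store) { this.store = store; }

        // Fixed: serviceStart no longer touches local credentials, so it
        // cannot fail just because the environment has none.
        void serviceStart() { /* nothing credential-related here */ }

        // Credentials are loaded only when deploying unbonded, i.e. when
        // no incoming delegation token was found.
        void deployUnbonded() {
            if (store.keys == null) {
                throw new IllegalStateException("no credentials in configuration");
            }
        }
    }

    public static void main(String[] args) {
        // With a delegation token present, startup succeeds even though there
        // are no local keys: deployUnbonded() is never called.
        FullCredentialsBinding binding =
            new FullCredentialsBinding(new CredentialStore(null));
        binding.serviceStart(); // must not throw
        System.out.println("started-without-local-keys");
    }
}
```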






[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928726#comment-16928726
 ] 

Steve Loughran commented on HADOOP-16547:
-

Actually, there's a simpler way to verify the move: look at the stack trace 
when credentials are unset and verify that it now happens in FS instantiation, 
rather than in metastore init.

{code}
2019-09-12 17:50:20,820 [main] INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service S3ADelegationTokens failed in 
state STARTED
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.fs.s3a.auth.MarshalledCredentials.validate(MarshalledCredentials.java:336)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.loadAWSCredentials(FullCredentialsTokenBinding.java:106)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.deployUnbonded(FullCredentialsTokenBinding.java:119)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.deployUnbonded(S3ADelegationTokens.java:245)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.bindToAnyDelegationToken(S3ADelegationTokens.java:278)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.serviceStart(S3ADelegationTokens.java:199)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3393)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:555)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:360)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.maybeInitFilesystem(S3GuardTool.java:381)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1098)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:425)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1700)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1709)
2019-09-12 17:50:20,822 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(221)) - Stopping delegation tokens
org.apache.hadoop.service.ServiceStateException: 
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3393)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:555)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:360)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.maybeInitFilesystem(S3GuardTool.java:381)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1098)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:425)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1700)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1709)
Caused by: org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: 
no credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.fs.s3a.auth.MarshalledCredentials.validate(MarshalledCredentials.java:336)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.loadAWSCredentials(FullCredentialsTokenBinding.java:106)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.deployUnbonded(FullCredentialsTokenBinding.java:119)
at 

[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928746#comment-16928746
 ] 

Vinayakumar B commented on HADOOP-13363:


I have created subtasks so we can proceed step by step.

Subtasks covering changes to other components are also kept here for now; if 
required, we can move them to the corresponding projects later.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-09-12 Thread Allen Wittenauer (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928778#comment-16928778
 ] 

Allen Wittenauer commented on HADOOP-13363:
---

Since I'm the originator of this JIRA issue, I can't get away from it. :( So 
while it's flooding my inbox, I thought I'd mention that upgrading the Apache 
Yetus tooling adds support for Uber's prototool. Its proto-file linting 
capabilities would likely go a long way towards getting the proto files sane 
and 3.x-compliant.
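Independent of prototool, one quick way to survey which proto files still default to proto2 syntax is a plain `grep` sweep over the tree (a sketch only; the exact paths in the Hadoop repo are illustrative, and this is not part of the proposed tooling):

```shell
# List .proto files that do not declare proto3 syntax.
# A file with no "syntax" line defaults to proto2, so the files
# printed here are the candidates to review before a 3.x migration.
find . -name '*.proto' -print0 |
  xargs -0 grep -L 'syntax = "proto3"'
```

Files that must stay wire-compatible with proto2 can of course keep their declaration; the sweep just makes the remaining surface visible.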

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms (see, for 
> example, https://gist.github.com/BennettSmith/7111094 ). Given the crazy 
> workarounds it forces on the build environment, and the fact that 2.5.0 is 
> slowly disappearing as a standard installable package even for Linux/x86, 
> we need to either upgrade, self-bundle, or something else.






[GitHub] [hadoop] nandakumar131 opened a new pull request #1435: HDDS-2119. Use checkstyle.xml and suppressions.xml in hdds/ozone projects for checkstyle validation.

2019-09-12 Thread GitBox
nandakumar131 opened a new pull request #1435: HDDS-2119. Use checkstyle.xml 
and suppressions.xml in hdds/ozone projects for checkstyle validation.
URL: https://github.com/apache/hadoop/pull/1435
 
 
   After #1423, hdds/ozone no longer relies on the Hadoop parent pom, so we 
have to use a separate checkstyle.xml and suppressions.xml in the hdds/ozone 
projects for checkstyle validation.
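For readers unfamiliar with the wiring, a pom fragment along these lines is how maven-checkstyle-plugin is usually pointed at project-local rule files (the paths and layout here are illustrative, not taken from the actual PR; `configLocation` and `suppressionsLocation` are the plugin's standard parameters):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- Paths are illustrative; the PR decides where the local copies live. -->
    <configLocation>${project.basedir}/dev-support/checkstyle.xml</configLocation>
    <suppressionsLocation>${project.basedir}/dev-support/suppressions.xml</suppressionsLocation>
  </configuration>
</plugin>
```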
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


