[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782174#comment-16782174
 ] 

Hadoop QA commented on HADOOP-16132:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960835/HADOOP-16132.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e11568e5618a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a15f7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16009/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16009/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16009/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[GitHub] hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468822465
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1212 | trunk passed |
   | +1 | compile | 89 | trunk passed |
   | +1 | checkstyle | 35 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 802 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 100 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 66 | the patch passed |
   | -1 | jshint | 83 | The patch generated 294 new + 1942 unchanged - 1053 
fixed = 2236 total (was 2995) |
   | +1 | compile | 69 | the patch passed |
   | +1 | javac | 69 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 54 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 759 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 110 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 87 | common in the patch failed. |
   | +1 | unit | 31 | framework in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3867 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 7544400e110b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8b72aea |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/diff-patch-jshint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: HADOOP-16132.005.patch
Status: Patch Available  (was: Open)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, HADOOP-16132.004.patch, HADOOP-16132.005.patch, 
> seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple of parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.
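
A minimal sketch of the pattern described in the comment above: issue ranged 
GETs for the next few parts in parallel, then reassemble the parts in order so 
the caller sees one contiguous stream. {{RangeFetcher}} here is a hypothetical 
stand-in for an S3 ranged GET (for example, a GetObjectRequest with a byte 
range); this is an illustration, not the code in the patch or PR.

{code:java}
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRangeDownload {

  /** Hypothetical abstraction over an S3 ranged GET. */
  interface RangeFetcher {
    byte[] fetch(long start, long endInclusive) throws Exception;
  }

  static byte[] download(RangeFetcher fetcher, long objectSize,
      long partSize, int parallelism) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(parallelism);
    try {
      // Submit one ranged GET per part; the pool downloads them in parallel.
      List<Future<byte[]>> parts = new ArrayList<>();
      for (long off = 0; off < objectSize; off += partSize) {
        final long start = off;
        final long end = Math.min(off + partSize, objectSize) - 1;
        parts.add(pool.submit(() -> fetcher.fetch(start, end)));
      }
      // Drain futures in submission order, so the caller sees a single
      // contiguous stream even when parts complete out of order.
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      for (Future<byte[]> part : parts) {
        out.write(part.get());
      }
      return out.toByteArray();
    } finally {
      pool.shutdown();
    }
  }
}
{code}

A production version would cap how many parts are buffered ahead of the read 
position rather than holding the whole object in memory, but the 
download-in-parallel, reorder-on-read idea is the same.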



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Status: Open  (was: Patch Available)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, HADOOP-16132.004.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple of parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782089#comment-16782089
 ] 

Hadoop QA commented on HADOOP-16132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16132 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960824/HADOOP-16132.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cf3deec70992 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16008/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16008/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16008/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782083#comment-16782083
 ] 

Wei-Chiu Chuang commented on HADOOP-16152:
--

Initially I thought maybe we had missed jetty-server in the shaded jar, but I 
did see the SessionHandler class in it:
{noformat}
$ jar tf hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.3.0-SNAPSHOT.jar | grep "jetty\/server\/session"

...

org/apache/hadoop/shaded/org/eclipse/jetty/server/session/SessionHandler.class
{noformat}
So if the Jetty 9.3 jar is shaded into the hadoop-client-minicluster jar, I 
don't see how this can happen. Did Hadoop 3.1 fail to ship shaded Hadoop jars 
(shading can optionally be disabled)? Does Spark link against hadoop-common 
instead of the hadoop-client-minicluster jar?
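
As a quick runtime diagnostic (a sketch, independent of this issue's code), one 
can ask the JVM which jar it actually loaded the class from:

{code:java}
// Diagnostic sketch: print the jar SessionHandler was loaded from, to tell a
// shaded hadoop-client-minicluster jar apart from a raw jetty-server jar on
// the downstream classpath.
public class WhichJar {
  public static void main(String[] args) throws Exception {
    Class<?> c = Class.forName(
        "org.eclipse.jetty.server.session.SessionHandler");
    System.out.println(
        c.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}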

 

For reference, we maintain a list of artifacts intended for downstream consumption at 
[https://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/DownstreamDev.html#Build_Artifacts]
 * hadoop-client
 * hadoop-client-api
 * hadoop-client-minicluster
 * hadoop-client-runtime
 * hadoop-hdfs-client
 * hadoop-hdfs-native-client
 * hadoop-mapreduce-client-app
 * hadoop-mapreduce-client-common
 * hadoop-mapreduce-client-core
 * hadoop-mapreduce-client-jobclient
 * hadoop-mapreduce-client-nativetask
 * hadoop-yarn-client

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof in InnerNodeImpl

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782080#comment-16782080
 ] 

Hadoop QA commented on HADOOP-16156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 18 unchanged - 1 fixed = 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16156 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960817/HADOOP-16156.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 33520b9bd8fd 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16006/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16006/testReport/ |
| Max. process+thread count | 1380 (vs. ulimit of 1) |
| 

[jira] [Commented] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782061#comment-16782061
 ] 

BELUGA BEHR commented on HADOOP-16148:
--

Hey [~ste...@apache.org]

Since I did the original work and you took the time to look at it, I thought I 
might as well see it through. All green now. :)

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782056#comment-16782056
 ] 

Hadoop QA commented on HADOOP-16148:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 1 unchanged - 9 fixed = 1 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960816/HADOOP-16148.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b85a2dd9b4e0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16005/testReport/ |
| Max. process+thread count | 1416 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16005/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Cleanup LineReader Unit Test
> 
>
> 

[jira] [Commented] (HADOOP-16157) [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782053#comment-16782053
 ] 

Hadoop QA commented on HADOOP-16157:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16157 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960821/HADOOP-16157.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ac4cea54e3a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16007/testReport/ |
| Max. process+thread count | 457 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16007/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore

[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Attachment: HADOOP-16132.004.patch
Status: Patch Available  (was: Open)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, HADOOP-16132.004.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple of parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16132) Support multipart download in S3AFileSystem

2019-03-01 Thread Justin Uang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Uang updated HADOOP-16132:
-
Status: Open  (was: Patch Available)

> Support multipart download in S3AFileSystem
> ---
>
> Key: HADOOP-16132
> URL: https://issues.apache.org/jira/browse/HADOOP-16132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Justin Uang
>Priority: Major
> Attachments: HADOOP-16132.001.patch, HADOOP-16132.002.patch, 
> HADOOP-16132.003.patch, seek-logs-parquet.txt
>
>
> I noticed that I get 150MB/s when I use the AWS CLI
> {code:java}
> aws s3 cp s3:/// - > /dev/null{code}
> vs 50MB/s when I use the S3AFileSystem
> {code:java}
> hadoop fs -cat s3:/// > /dev/null{code}
> Looking into the AWS CLI code, it looks like the 
> [download|https://github.com/boto/s3transfer/blob/ca0b708ea8a6a1213c6e21ca5a856e184f824334/s3transfer/download.py]
>  logic is quite clever. It downloads the next couple of parts in parallel using 
> range requests, and then buffers them in memory in order to reorder them and 
> expose a single contiguous stream. I translated the logic to Java and 
> modified the S3AFileSystem to do similar things, and am able to achieve 
> 150MB/s download speeds as well. It is mostly done but I have some things to 
> clean up first. The PR is here: 
> https://github.com/palantir/hadoop/pull/47/files
> It would be great to get some other eyes on it to see what we need to do to 
> get it merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #543: HDDS-1211. Test SCMChillMode failing 
randomly in Jenkins run
URL: https://github.com/apache/hadoop/pull/543#issuecomment-468796851
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1000 | trunk passed |
   | -1 | compile | 52 | integration-test in trunk failed. |
   | +1 | checkstyle | 21 | trunk passed |
   | -1 | mvnsite | 26 | integration-test in trunk failed. |
   | +1 | shadedclient | 666 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 16 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 21 | integration-test in the patch failed. |
   | -1 | compile | 21 | integration-test in the patch failed. |
   | -1 | javac | 21 | integration-test in the patch failed. |
   | -0 | checkstyle | 15 | hadoop-ozone/integration-test: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 14 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | integration-test in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 2747 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/543 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 64ea92566487 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cab8529 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/branch-compile-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/diff-checkstyle-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16157) [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16157:

Attachment: HADOOP-16157.001.patch
Status: Patch Available  (was: Open)

Removed occurrences of NULL check before instanceof operation.
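
For context, a minimal before/after illustration of the clean-up (hypothetical 
code, not the patch itself): {{instanceof}} already evaluates to false for a 
null operand, so the explicit null guard is redundant.

{code:java}
public class InstanceofCleanup {
  public static void main(String[] args) {
    Object key = "wasb";

    // Before the clean-up: the null guard adds nothing.
    if (key != null && key instanceof String) {
      System.out.println("non-null String");
    }

    // After the clean-up: identical behavior, since
    // `null instanceof String` is false.
    if (key instanceof String) {
      System.out.println("non-null String");
    }
  }
}
{code}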

> [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore
> 
>
> Key: HADOOP-16157
> URL: https://issues.apache.org/jira/browse/HADOOP-16157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16157.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #542: HDDS-1204. Fix ClassNotFound issue with javax.xml.bind.DatatypeConver…

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #542: HDDS-1204. Fix ClassNotFound issue with 
javax.xml.bind.DatatypeConver…
URL: https://github.com/apache/hadoop/pull/542#issuecomment-468787402
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 975 | trunk passed |
   | +1 | compile | 41 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 692 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 68 | trunk passed |
   | +1 | javadoc | 37 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 73 | the patch passed |
   | +1 | javadoc | 32 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 61 | common in the patch failed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3011 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/542 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9019030d0e84 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de1dae6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16157) [Clean-up] Remove NULL check before instanceof in AzureNativeFileSystemStore

2019-03-01 Thread Shweta (JIRA)
Shweta created HADOOP-16157:
---

 Summary: [Clean-up] Remove NULL check before instanceof in 
AzureNativeFileSystemStore
 Key: HADOOP-16157
 URL: https://issues.apache.org/jira/browse/HADOOP-16157
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Shweta
Assignee: Shweta






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof in InnerNodeImpl

2019-03-01 Thread Shweta (JIRA)
Shweta created HADOOP-16156:
---

 Summary: [Clean-up] Remove NULL check before instanceof in 
InnerNodeImpl
 Key: HADOOP-16156
 URL: https://issues.apache.org/jira/browse/HADOOP-16156
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Shweta
Assignee: Shweta






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof in InnerNodeImpl

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: HADOOP-16156.001.patch
Status: Patch Available  (was: Open)

> [Clean-up] Remove NULL check before instanceof in InnerNodeImpl
> ---
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-16148:
-
Attachment: HADOOP-16148.2.patch

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 opened a new pull request #543: HDDS-1211. Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread GitBox
bharatviswa504 opened a new pull request #543: HDDS-1211. Test SCMChillMode 
failing randomly in Jenkins run
URL: https://github.com/apache/hadoop/pull/543
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-16148:
-
Status: Patch Available  (was: Open)

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-16148:
-
Status: Open  (was: Patch Available)

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] xiaoyuyao commented on a change in pull request #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-03-01 Thread GitBox
xiaoyuyao commented on a change in pull request #526: HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#discussion_r261722027
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
 ##
 @@ -669,6 +676,12 @@ public Path getWorkingDirectory() {
 return workingDir;
   }
 
+  @Override
+  @Override
+  public Token<?> getDelegationToken(String renewer) throws IOException {
+    return securityEnabled ? adapter.getDelegationToken(renewer) :
+        super.getDelegationToken(renewer);
 
 Review comment:
   bq. "should fetch DT from both", that is handled by addDelegationTokens() in 
the TokenIssuer.
   Here we follow the FileSystem contract: return the OM delegation token if 
Ozone security is enabled, otherwise null, as specified in 
FileSystem#getDelegationToken().
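   Spelled out, the override in the diff above amounts to the following (a 
minimal sketch; `securityEnabled` and `adapter` are the existing 
OzoneFileSystem fields shown in the diff):
   
   ```java
   @Override
   public Token<?> getDelegationToken(String renewer) throws IOException {
     // FileSystem contract: hand back the OM delegation token when Ozone
     // security is on, otherwise defer to the default (which returns null).
     return securityEnabled
         ? adapter.getDelegationToken(renewer)
         : super.getDelegationToken(renewer);
   }
   ```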


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-03-01 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781980#comment-16781980
 ] 

Jonathan Eagles commented on HADOOP-16119:
--

I wonder about supporting 2 endpoints to allow installations with existing KMS 
over jetty to migrate to KMS over RPC with no downtime! [~daryn], could you 
comment on the design approach?

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: Design doc_ KMS v2.pdf
>
>
> Per discussion on common-dev and text copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781977#comment-16781977
 ] 

Steve Loughran commented on HADOOP-16148:
-

looks good, but checkstyle is unhappy, especially about the javadocs - I worry 
more about breaking them than anything else.

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HADOOP-16148.1.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-03-01 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781975#comment-16781975
 ] 

Shweta commented on HADOOP-13656:
-

Yes [~sodonnell], I am working on this issue but got stalled due to something 
else. Yes, I would agree that Delete needs the change in terms of having a 
filesystem added. Will post a patch soon.

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have to 
> change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] xiaoyuyao opened a new pull request #542: HDDS-1204. Fix ClassNotFound issue with javax.xml.bind.DatatypeConver…

2019-03-01 Thread GitBox
xiaoyuyao opened a new pull request #542: HDDS-1204. Fix ClassNotFound issue 
with javax.xml.bind.DatatypeConver…
URL: https://github.com/apache/hadoop/pull/542
 
 
   …ter used by DefaultProfile. Contributed by Xiaoyu Yao.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781973#comment-16781973
 ] 

Steve Loughran commented on HADOOP-16152:
-

Is that the stuff which actually runs in the YARN NMs? If so, no: that bit 
isn't isolated.

If other projects have gone up, it makes sense, though there will be a price: 
once YARN moves up, you aren't going to be able to run older versions of the 
Spark shuffle.

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781960#comment-16781960
 ] 

Hadoop QA commented on HADOOP-16140:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16140 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960802/HADOOP-14200.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 39145c30e8df 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dcaca19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16004/testReport/ |
| Max. process+thread count | 1402 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16004/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
>

[GitHub] ajayydv opened a new pull request #541: HDDS-134. SCM CA: OM sends CSR and uses certificate issued by SCM. Co…

2019-03-01 Thread GitBox
ajayydv opened a new pull request #541: HDDS-134. SCM CA: OM sends CSR and uses 
certificate issued by SCM. Co…
URL: https://github.com/apache/hadoop/pull/541
 
 
   …ntributed by Ajay Kumar.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16139) NPE in ABFS Client Credential Auth

2019-03-01 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16139.
-
Resolution: Duplicate

Fixed in HADOOP-16068

> NPE in ABFS Client Credential Auth
> --
>
> Key: HADOOP-16139
> URL: https://issues.apache.org/jira/browse/HADOOP-16139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> While trying to get ABFS & OAuth client credentials work, I got an NPE instead



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15582) Document ABFS

2019-03-01 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15582.
-
Resolution: Fixed

Fixed in HADOOP-16068

> Document ABFS
> -
>
> Key: HADOOP-15582
> URL: https://issues.apache.org/jira/browse/HADOOP-15582
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Thomas Marquardt
>Priority: Major
>
> Add documentation for abfs under 
> {{hadoop-tools/hadoop-azure/src/site/markdown}}
> Possible topics include
> * intro to scheme
> * why abfs (link to MSDN, etc)
> * config options
> * switching from wasb/interop
> * troubleshooting
> testing.md should add a section on testing this stuff too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781939#comment-16781939
 ] 

Steve Loughran commented on HADOOP-13075:
-

bq. is SSE-KMS supported or experimental?

works really well; we have the tests to prove it

# if the caller doesn't have perms to decrypt a file, they get a 401 unauthed 
error, which can cause confusion if they have read access to the store itself
# when you rename(), the copy will decrypt the files and re-encrypt them with 
the current key
# I've never seen any good numbers on the impact in cost or performance of 
SSE-KMS, especially on random IO.
# the role delegation tokens in HADOOP-14556 add the KMS encrypt/decrypt 
rights to the permissions of the generated tokens, so are compatible with 
SSE-KMS too
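
A minimal configuration sketch for anyone trying this (the key ARN below is a 
placeholder; the two property names are the standard hadoop-aws options):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// encrypt new objects with SSE-KMS
conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
// optional: a specific CMK; if unset, the default aws/s3 KMS key is used
conf.set("fs.s3a.server-side-encryption.key",
    "arn:aws:kms:us-west-2:123456789012:key/placeholder-key-id");
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);
{code}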

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-03-01 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781915#comment-16781915
 ] 

Ben Roling commented on HADOOP-15625:
-

On another tangent, something I noticed is that it doesn't look like [S3 
Select|https://aws.amazon.com/blogs/aws/s3-glacier-select/] supports ETag or 
versionId qualification (either client or server side).  That doesn't seem 
directly applicable to this particular JIRA since SelectInputStream already 
doesn't support seek() backwards or any other reason for re-open, but it would 
mean something for HADOOP-16085.  S3Guard will not be able to avoid 
read-after-overwrite inconsistency with S3 Select.  I think that will just need 
to be a highlighted caveat in the documentation.  It's possible that eventually 
S3 Select will support such qualifications and the implementation could be 
updated at that point.

Steve - do you have any comments as to how the patch looks now otherwise?  Are 
there any items I should/could still be following up on before this can be 
merged?

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
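
For illustration, the proposed check could be as small as this (a sketch only; 
the names are hypothetical, not the actual patch):

{code:java}
// etag captured from the first HEAD/GET when the stream was opened
private String etagAtOpen;

private void checkETag(String key, String etagFromGet) throws IOException {
  if (etagAtOpen == null) {
    etagAtOpen = etagFromGet;        // remember the etag of the first request
  } else if (!etagAtOpen.equals(etagFromGet)) {
    // the remote object changed mid-read: fail loudly rather than silently
    // mixing old and new data
    throw new IOException("ETag mismatch on " + key
        + ": expected " + etagAtOpen + " but got " + etagFromGet);
  }
}
{code}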



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15883) Fix WebHdfsFileSystemContract test

2019-03-01 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15883:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Fix WebHdfsFileSystemContract test
> --
>
> Key: HADOOP-15883
> URL: https://issues.apache.org/jira/browse/HADOOP-15883
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15883.001.patch
>
>
> HADOOP-15864 fixed a bug where Jobs/Tasks fail when a server (NameNode, 
> KMS, Timeline) domain name can not be resolved. Meanwhile it changed the 
> semantics of the HTTP status codes in WebHdfsFileSystem; this ticket tracks 
> fixing TestWebHdfsFileSystemContract#testResponseCode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2019-03-01 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781888#comment-16781888
 ] 

He Xiaoqiao commented on HADOOP-15864:
--

Uploaded new patch [^HADOOP-15864.004.patch]; I try to fix this issue with a 
new configuration in order to avoid this fix affecting other unit tests.

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.004.patch, HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and Task execution fail if the Standby NameNode domain name 
> can not be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to deal with the Active-Standby 
> switch, the two tokens' content is of course the same. 
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved or not; if not, 
> it throws an #IllegalArgumentException, and the job submitter / task 
> executor fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why a NameNode domain name can not be 
> resolved. I think there are many scenarios, for instance node replacement 
> after a fault, or a DNS refresh. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.<init>(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at 

[GitHub] hadoop-yetus commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #523: HDDS-623. On SCM UI, Node Manager info is 
empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-468739775
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 396 | root in trunk failed. |
   | -1 | compile | 22 | server-scm in trunk failed. |
   | +1 | mvnsite | 44 | trunk passed |
   | +1 | shadedclient | 1097 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 749 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 152 | server-scm in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2306 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/523 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  |
   | uname | Linux a7cf3f31e092 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dcaca19 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/2/artifact/out/branch-compile-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/2/testReport/ |
   | Max. process+thread count | 576 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved

2019-03-01 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15864:
-
Attachment: HADOOP-15864.004.patch

> Job submitter / executor fail when SBN domain name can not resolved
> ---
>
> Key: HADOOP-15864
> URL: https://issues.apache.org/jira/browse/HADOOP-15864
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15864-branch.2.7.001.patch, 
> HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, 
> HADOOP-15864.004.patch, HADOOP-15864.branch.2.7.004.patch
>
>
> Job submission and Task execution fail if the Standby NameNode domain name 
> can not be resolved on HDFS HA with the DelegationToken feature.
> This issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} 
> instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA 
> mode with security. Since in HDFS HA mode the UGI needs to include a 
> separate token for each NameNode in order to deal with the Active-Standby 
> switch, the two tokens' content is of course the same. 
> However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} 
> checks whether the address of the NameNode has been resolved or not; if not, 
> it throws an #IllegalArgumentException, and the job submitter / task 
> executor fails.
> HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two 
> tickets resolve it completely.
> Another question many people ask is why a NameNode domain name can not be 
> resolved. I think there are many scenarios, for instance node replacement 
> after a fault, or a DNS refresh. In any case, a Standby NameNode failure 
> should not impact Hadoop cluster stability, in my opinion.
> a. code ref: org.apache.hadoop.security.SecurityUtil line373-386
> {code:java}
>   public static Text buildTokenService(InetSocketAddress addr) {
> String host = null;
> if (useIpForTokenService) {
>   if (addr.isUnresolved()) { // host has no ip address
> throw new IllegalArgumentException(
> new UnknownHostException(addr.getHostName())
> );
>   }
>   host = addr.getAddress().getHostAddress();
> } else {
>   host = StringUtils.toLowerCase(addr.getHostName());
> }
> return new Text(host + ":" + addr.getPort());
>   }
> {code}
> b.exception log ref:
> {code:xml}
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Couldn't create proxy provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:761)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:691)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.<init>(ChRootedFileSystem.java:106)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
> at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:377)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:172)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
> at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
> ... 35 more
> Caused by: java.lang.reflect.InvocationTargetException
> at 

[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-03-01 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781859#comment-16781859
 ] 

Stephen O'Donnell commented on HADOOP-13656:


[~shwetayakkali] Are you planning to work on this issue? It was raised in 
HADOOP-16140 where I was adding a new option to the expunge command.

I think we can add the ability to pass in a file system with some changes only 
in the Delete.Expunge class as the trash already has the ability to be 
instantiated with a different filesystem than the default.
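
Since Trash already takes a FileSystem, the Delete.Expunge change might be as 
small as this (a rough sketch; the optional {{fsUri}} argument and its parsing 
are hypothetical):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;

// fsUri is the (hypothetical) optional argument; null means the default FS
Configuration conf = getConf();
FileSystem fs = (fsUri == null)
    ? FileSystem.get(conf)
    : FileSystem.get(URI.create(fsUri), conf);
Trash trash = new Trash(fs, conf);  // Trash accepts a non-default filesystem
trash.expunge();                    // remove checkpoints past the interval
trash.checkpoint();                 // roll current trash into a new checkpoint
{code}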

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have to 
> change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-03-01 Thread GitBox
elek commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-468726587
 
 
   Yup. Nice catch, thank you. I removed that line, too.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for 
Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468725702
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/502 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/502 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-502/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-01 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781849#comment-16781849
 ] 

Stephen O'Donnell edited comment on HADOOP-16140 at 3/1/19 4:30 PM:


Uploaded one more patch - this contains a note in the docs about the new option 
and hopefully fixes the remaining checkstyle issues.

I think adding the ability to pass a filesystem to the expunge command is quite 
easy as the Trash itself has the ability to accept a non-default FS when it is 
created. The tricky part will be adding a test for it I think. I will add a 
comment to HADOOP-13656 and see if the current assignee wants to work on it.


was (Author: sodonnell):
Uploaded one more patch - this contains a note in the docs about the new 
options and hopefully fixes the remaining checkstyle issues.

I think adding the ability to pass a filesystem to the expunge command is quite 
easy as the Trash itself has the ability to accept a non-default FS when it is 
created. The tricky part will be adding a test for it I think. I will add a 
comment to HADOOP-13656 and see if the current assignee wants to work on it.

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current users trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
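
As commands, the two-step workaround quoted above looks like this (timing 
depends on fs.trash.interval):

{code}
hadoop fs -expunge
# wait over a minute, then expunge again with a 1-minute interval
hadoop fs -Dfs.trash.interval=1 -expunge
{code}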



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-03-01 Thread GitBox
elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone 
datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468725061
 
 
   Shame on me. I created the .keep file locally, but it's ignored by the 
.gitignore file, so it was never pushed. Now it's also fixed. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-01 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781849#comment-16781849
 ] 

Stephen O'Donnell commented on HADOOP-16140:


Uploaded one more patch - this contains a note in the docs about the new 
options and hopefully fixes the remaining checkstyle issues.

I think adding the ability to pass a filesystem to the expunge command is quite 
easy as the Trash itself has the ability to accept a non-default FS when it is 
created. The tricky part will be adding a test for it I think. I will add a 
comment to HADOOP-13656 and see if the current assignee wants to work on it.

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current users trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-01 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-16140:
---
Attachment: HADOOP-14200.004.patch

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current users trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions

2019-03-01 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15781:

Description: 
This is caused by HADOOP-15642 but I'd missed it because I'd been playing with 
assumed roles locally (restricting their rights) and mistook the failures for 
"steve's misconfigured the test role", not "the SDK

Some of the failures are actually due to changed messages from the AWS 
endpoints, so are independent of the SDK —and mean that existing builds will 
fail with false positives. The branch-3.1 patch fixes those tests only and 
needs to be applied to any branch with the ITestAssumeRole tests

  was:


This is caused by HADOOP-15642 but I'd missed it because I'd been playing with 
assumed roles locally (restricting their rights) and mistook the failures for 
"steve's misconfigured the test role", not "the SDK


> S3A assumed role tests failing due to changed error text in AWS exceptions
> --
>
> Key: HADOOP-15781
> URL: https://issues.apache.org/jira/browse/HADOOP-15781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.0, 3.2.0
> Environment: some of the fault-catching tests in {{ITestAssumeRole}} 
> are failing as the SDK update of HADOOP-15642 changed the text. Fix the 
> tests, perhaps by removing the text check entirely 
> —it's clearly too brittle
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15781-001.patch, HADOOP-15781-branch-3.1-002.patch
>
>
> This is caused by HADOOP-15642 but I'd missed it because I'd been playing 
> with assumed roles locally (restricting their rights) and mistook the 
> failures for "steve's misconfigured the test role", not "the SDK
> Some of the failures are actually due to changed messages from the AWS 
> endpoints, so are independent of the SDK —and mean that existing builds will 
> fail with false positives. The branch-3.1 patch fixes those tests only and 
> needs to be applied to any branch with the ITestAssumeRole tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-03-01 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781824#comment-16781824
 ] 

Ben Roling commented on HADOOP-15625:
-

bq. BTW, looking @ amazon snowball docs. They only serve up etags from files 
uploaded with MPU

Interesting.  Sounds like something to add to documentation with regard to 
using S3AFileSystem over top of a snowball?  I guess it looks like 
S3AFileSystem over top of snowball doesn't really work at the moment anyway per 
HADOOP-14710.

Another question this raises is whether or not data imported into S3 from 
snowball can make its way into the real S3 without an eTag.  I would hope/guess 
that an eTag is generated on the transfer from the snowball into the real S3, 
but I don't have any direct evidence of that.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-01 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781817#comment-16781817
 ] 

Gabor Bota commented on HADOOP-15999:
-

Sure, here are the stacktraces:  
https://gist.github.com/bgaborg/4378fd13cf9ee8dab9475274a6dd251d

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
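
In rough Java, the "query both, trust the newer" idea might look like this (a 
sketch only; getS3FileStatus stands in for whatever internal S3 probe the 
patch ends up using):

{code:java}
FileStatus getFileStatusOutOfBandAware(Path path) throws IOException {
  FileStatus fromS3 = getS3FileStatus(path);     // ask S3 directly
  PathMetadata entry = metadataStore.get(path);  // ask the MetadataStore
  FileStatus fromMs = (entry == null) ? null : entry.getFileStatus();
  if (fromMs == null) {
    return fromS3;
  }
  if (fromS3 == null) {
    return fromMs;
  }
  // an out-of-band change wins when S3's copy is newer than the store's entry
  return fromS3.getModificationTime() >= fromMs.getModificationTime()
      ? fromS3 : fromMs;
}
{code}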



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-01 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r261641379
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,17 +18,51 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
 
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+    return Arrays.asList(new Object[][]{
+        {Constants.INPUT_FADV_RANDOM},
+        {Constants.INPUT_FADV_NORMAL},
+        {Constants.INPUT_FADV_SEQUENTIAL},
+    });
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+  
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-01 Thread GitBox
hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-468699388
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:---------|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1021 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 672 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 26 | hadoop-aws in the patch failed. |
   | -1 | compile | 25 | hadoop-aws in the patch failed. |
   | -1 | javac | 25 | hadoop-aws in the patch failed. |
   | -0 | checkstyle | 16 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) |
   | -1 | mvnsite | 26 | hadoop-aws in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 654 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 20 | hadoop-aws in the patch failed. |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 21 | The patch does not generate ASF License warnings. |
   | | | 2763 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/539 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b3467d524f1e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / dcaca19 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-03-01 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781733#comment-16781733
 ] 

Adam Antal commented on HADOOP-16124:
-

I verified that the link is correct (core-site.xml).

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this little addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-03-01 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781730#comment-16781730
 ] 

Adam Antal commented on HADOOP-16124:
-

Created a PR so that the md files can be checked 
([https://github.com/apache/hadoop/pull/540]).

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this little addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] adamantal opened a new pull request #540: HADOOP-16124. Extend documentation in testing.md about endpoint constants

2019-03-01 Thread GitBox
adamantal opened a new pull request #540: HADOOP-16124. Extend documentation in 
testing.md about endpoint constants
URL: https://github.com/apache/hadoop/pull/540
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran opened a new pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-01 Thread GitBox
steveloughran opened a new pull request #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539
 
 
   HADOOP-16109. Parquet reading S3AFileSystem causes EOF
   
   Nobody gets seek right, no matter how many times they think they have.
   
   Reproducible test from: Dave Christianson
   Fixed seek() logic: Steve Loughran


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran commented on issue #535: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-01 Thread GitBox
steveloughran commented on issue #535: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/535#issuecomment-468681801
 
 
   I think I've been doing stupid things with SourceTree, pushing stuff back up 
to apache. I used to keep the source tree I did my coding on read-only, to stop 
me ever accidentally pushing up my private stuff, and now there's confusion. 
Fix: edit the git remote list so there is no valid push URL for apache, then 
resubmit this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781712#comment-16781712
 ] 

Steve Loughran commented on HADOOP-15999:
-

Other tests run parameterized fine; it worked for me in IntelliJ. So if there 
is a problem here, it's not related to parameterization; it's more about how 
JUnit runs launched from the IDE differ from those of an external runner. Can 
you post the stack traces?

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14710) Uber-JIRA: Support AWS Snowball

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781709#comment-16781709
 ] 

Steve Loughran commented on HADOOP-14710:
-

[~kdzhao]:
1. distcp now has a -direct option which avoids the copy
2. you need to enable path-style access and switch back to the v1 list API 
(there are options for both)

bq. if I use "s3a://xyz" instead (no slash at the end), then the error is 
like: ls: 's3a://xyz': No such file or directory.

That's an irritating feature of the FS shell. If you don't give any path in 
the URL, it means "your home directory", so unless a /user/$myname path 
exists, you get a 404. Create the path or use a trailing /. There's nothing we 
can do to fix it. I've thought about making the home dir of a bucket "/", but 
that might break existing things. Sorry; it annoys us all, repeatedly.
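For point 2, a minimal sketch of those two options set programmatically. The 
configuration keys are the standard S3A ones; the endpoint URL below is a 
made-up placeholder.

{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Illustrative sketch only: the settings suggested above for
 * Snowball-style endpoints. The endpoint URL is a hypothetical placeholder.
 */
public class SnowballClientConfig {
  public static Configuration create() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.endpoint", "http://snowball.example:8080"); // placeholder
    conf.setBoolean("fs.s3a.path.style.access", true); // path-style addressing
    conf.setInt("fs.s3a.list.version", 1); // switch back to the v1 list API
    return conf;
  }
}
{code}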


> Uber-JIRA: Support AWS Snowball
> ---
>
> Key: HADOOP-14710
> URL: https://issues.apache.org/jira/browse/HADOOP-14710
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Major
>
> Support data transfer between Hadoop and [AWS 
> Snowball|http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-03-01 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16124:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-15620

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this little addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-03-01 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16124:

Target Version/s: 3.3.0, 3.2.1  (was: 3.2.1)

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for endpoints in the core-site.xml in 
> hadoop-aws. This is useful to know when someone comes across testing in 
> hadoop-aws, so I suggest adding this little addition to testing.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-03-01 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781587#comment-16781587
 ] 

Steve Loughran commented on HADOOP-15625:
-

BTW, looking at the Amazon Snowball docs: they only serve up etags for files 
uploaded with MPU (multipart upload).

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
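
A minimal sketch of the three-step scheme above, assuming a generic reader; 
the class and method names are placeholders, not the actual S3AInputStream 
code.

{code}
import java.io.IOException;

/**
 * Sketch of the etag-checking scheme described above: cache the etag from
 * the first HEAD/GET, compare it on every later GET, and fail loudly if
 * the remote file changed. Names are illustrative placeholders.
 */
class EtagCheckingReader {
  private String knownEtag; // step 1: etag cached from the first HEAD/GET

  /** Call with the etag returned by each GET response (step 2). */
  void verify(String etagFromResponse) throws IOException {
    if (knownEtag == null) {
      knownEtag = etagFromResponse;
    } else if (!knownEtag.equals(etagFromResponse)) {
      // step 3: raise an IOE rather than silently serving mixed data
      throw new IOException("Remote file changed during read: etag was "
          + knownEtag + ", now " + etagFromResponse);
    }
  }
}
{code}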



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-01 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Description: 
Some big data projects have upgraded Jetty to 9.4.x, which causes some 
compatibility issues.

Spark: 
[https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211

  was:
Some big data projects have upgraded Jetty to 9.4.x, which causes some 
compatibility issues.

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-03-01 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781483#comment-16781483
 ] 

Takanobu Asanuma commented on HADOOP-16126:
---

Thanks for your +1, [~ajisakaa].

Committed to branch-2, branch-2.9 and branch-2.8.

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job's running time by 10x.
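
For reference, a sketch of the quoted loop with only the described change 
applied (the shorter poll interval); this illustrates the proposal, not 
necessarily the committed patch.

{code}
// Sketch: the same poll loop as above, with the interval the description
// suggests; behaviour is otherwise unchanged.
while (!connections.isEmpty()) {
  try {
    Thread.sleep(10); // was 100ms
  } catch (InterruptedException e) {
    // same swallow-and-retry behaviour as the original
  }
}
{code}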



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-03-01 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-16126:
--
Fix Version/s: 2.9.3
   2.8.6
   2.10.0

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job's running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #536: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-01 Thread GitBox
elek commented on issue #536: HDDS-1136 : Add metric counters to capture the 
RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/536#issuecomment-468591872
 
 
   > I am not sure why jenkins is not able to apply my patch on trunk. I pulled 
from apache hadoop trunk and merged to my branch. I also verified 
https://github.com/apache/hadoop/pull/536.patch applies cleanly on apache 
hadoop trunk. @elek Is it possible jenkins is trying to apply individual 
commits rather than the squashed patch?
   
   Don't know. Could be a Yetus bug. You can ask on the Yetus dev list / JIRA, 
or ask @aw-was-here.
   
   BTW, in PRs you can force-push. GitHub handles it very well; you don't need 
a new PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek closed pull request #529: HDDS-1191. Replace Ozone Rest client with S3 client in smoketests and docs

2019-03-01 Thread GitBox
elek closed pull request #529: HDDS-1191. Replace Ozone Rest client with S3 
client in smoketests and docs
URL: https://github.com/apache/hadoop/pull/529
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org