[GitHub] [hadoop] dineshchitlangia commented on issue #554: SUBMARINE-41:Fix ASF License warnings in Submarine

2019-03-04 Thread GitBox
dineshchitlangia commented on issue #554: SUBMARINE-41:Fix ASF License warnings 
in Submarine
URL: https://github.com/apache/hadoop/pull/554#issuecomment-469569084
 
 
   Ah! I found the problem. I will revise the changes and submit again.





[GitHub] [hadoop] hadoop-yetus commented on issue #554: SUBMARINE-41:Fix ASF License warnings in Submarine

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #554: SUBMARINE-41:Fix ASF License warnings in 
Submarine
URL: https://github.com/apache/hadoop/pull/554#issuecomment-469566311
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 996 | trunk passed |
   | +1 | compile | 26 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 1711 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 21 | the patch passed |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | +1 | mvnsite | 22 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | hadoop-submarine-core in the patch passed. |
   | -1 | asflicense | 27 | The patch generated 11 ASF License warnings. |
   | | | 2758 | |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-554/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/554 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux 5784a0b5c3d0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e40e2d6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-554/1/testReport/ |
   | asflicense | https://builds.apache.org/job/hadoop-multibranch/job/PR-554/1/artifact/out/patch-asflicense-problems.txt |
   | Max. process+thread count | 444 (vs. ulimit of 5500) |
   | modules | C: hadoop-submarine/hadoop-submarine-core U: hadoop-submarine/hadoop-submarine-core |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-554/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] dineshchitlangia opened a new pull request #554: SUBMARINE-41:Fix ASF License warnings in Submarine

2019-03-04 Thread GitBox
dineshchitlangia opened a new pull request #554: SUBMARINE-41:Fix ASF License 
warnings in Submarine
URL: https://github.com/apache/hadoop/pull/554
 
 
   Fix ASF License warnings in Submarine





[jira] [Updated] (HADOOP-16163) NPE in setup/teardown of ITestAbfsDelegationTokens

2019-03-04 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16163:
-
Attachment: HADOOP-16163-001.patch

> NPE in setup/teardown of ITestAbfsDelegationTokens
> --
>
> Key: HADOOP-16163
> URL: https://issues.apache.org/jira/browse/HADOOP-16163
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16163-001.patch
>
>
> Managed to create this in a local (variant) branch: 
> {code}
> [ERROR] Errors: 
> [ERROR] 
> org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens.testCanonicalization(org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens)
> [ERROR]   Run 1: ITestAbfsDelegationTokens.setup:107 » NullPointer
> [ERROR]   Run 2: ITestAbfsDelegationTokens.teardown:130 » NullPointer
> {code}






[jira] [Commented] (HADOOP-16136) ABFS: Should only transform username to short name

2019-03-04 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784133#comment-16784133
 ] 

Da Zhou commented on HADOOP-16136:
--

Ah, yes, this needs to be backported to 3.2. Thanks.

> ABFS: Should only transform username to short name
> --
>
> Key: HADOOP-16136
> URL: https://issues.apache.org/jira/browse/HADOOP-16136
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16136-001.patch
>
>
> When short name is enabled, IdentityTransformer should only transform the 
> user name to a short name; the group name should remain unchanged.
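
A minimal sketch of the intended behavior described above (hypothetical helper
names, not the actual IdentityTransformer code):

{code:java}
// Hedged sketch: only the user principal is shortened (Kerberos realm
// stripped); group names pass through untouched.
public class ShortNameSketch {
  static String toShortUserName(String principal) {
    int at = principal.indexOf('@');
    return (at < 0) ? principal : principal.substring(0, at);
  }

  static String transformGroup(String groupName) {
    return groupName; // group names should remain as-is
  }

  public static void main(String[] args) {
    System.out.println(toShortUserName("alice@EXAMPLE.COM")); // alice
    System.out.println(transformGroup("hadoop-admins"));      // hadoop-admins
  }
}
{code}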






[jira] [Comment Edited] (HADOOP-16163) NPE in setup/teardown of ITestAbfsDelegationTokens

2019-03-04 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784132#comment-16784132
 ] 

Da Zhou edited comment on HADOOP-16163 at 3/5/19 6:17 AM:
--

This issue can be fixed by moving the test to the non-parallel execution task.

{code:java}
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.524 s 
- in org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens

{code}



was (Author: danielzhou):
This issue can be fixed by moving the test to the non-parallel execution task.

> NPE in setup/teardown of ITestAbfsDelegationTokens
> --
>
> Key: HADOOP-16163
> URL: https://issues.apache.org/jira/browse/HADOOP-16163
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16163-001.patch
>
>
> Managed to create this in a local (variant) branch: 
> {code}
> [ERROR] Errors: 
> [ERROR] 
> org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens.testCanonicalization(org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens)
> [ERROR]   Run 1: ITestAbfsDelegationTokens.setup:107 » NullPointer
> [ERROR]   Run 2: ITestAbfsDelegationTokens.teardown:130 » NullPointer
> {code}






[jira] [Commented] (HADOOP-16163) NPE in setup/teardown of ITestAbfsDelegationTokens

2019-03-04 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784132#comment-16784132
 ] 

Da Zhou commented on HADOOP-16163:
--

This issue can be fixed by moving the test to the non-parallel execution task.

> NPE in setup/teardown of ITestAbfsDelegationTokens
> --
>
> Key: HADOOP-16163
> URL: https://issues.apache.org/jira/browse/HADOOP-16163
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16163-001.patch
>
>
> Managed to create this in a local (variant) branch: 
> {code}
> [ERROR] Errors: 
> [ERROR] 
> org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens.testCanonicalization(org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens)
> [ERROR]   Run 1: ITestAbfsDelegationTokens.setup:107 » NullPointer
> [ERROR]   Run 2: ITestAbfsDelegationTokens.teardown:130 » NullPointer
> {code}






[jira] [Commented] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784076#comment-16784076
 ] 

Hudson commented on HADOOP-16162:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16122 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16122/])
HADOOP-16162. Remove unused Job Summary Appender configurations from (aajisaka: 
rev fe7551f21bf38a1d66f8375d14b814a0dbd38003)
* (edit) hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


> Remove unused Job Summary Appender configurations from log4j.properties
> ---
>
> Key: HADOOP-16162
> URL: https://issues.apache.org/jira/browse/HADOOP-16162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.3-alpha
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Major
>  Labels: CI, pull-request-available
> Fix For: 3.3.0
>
> Attachments: HADOOP-16162.1.patch
>
>
> The Job Summary Appender (JSA) was introduced in 
> [MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to 
> provide summary information about a job's runtime. This appender is 
> referenced only by the logger defined in 
> org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
> removed in 
> [MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] 
> together with other M/R1 files, so the appender is no longer used and I 
> think we can remove it.
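
For context, a hedged illustration of why the appender definition is now dead
configuration: in log4j 1.x an appender declared in log4j.properties is only
reached through a logger name, and the class that owned that logger is gone
(the sketch below is illustrative, not actual MR1 code):

{code:java}
import org.apache.log4j.Logger;

public class JobSummaryLoggerSketch {
  public static void main(String[] args) {
    // MR1 obtained the job-summary logger roughly like this; with
    // JobInProgress$JobSummary removed, no code resolves this logger,
    // so the appender wired to it in log4j.properties is unreachable.
    Logger summary =
        Logger.getLogger("org.apache.hadoop.mapred.JobInProgress$JobSummary");
    summary.info("jobId=job_0001,status=SUCCEEDED"); // no such caller remains
  }
}
{code}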






[jira] [Assigned] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16162:
--

Assignee: Chen Zhi

> Remove unused Job Summary Appender configurations from log4j.properties
> ---
>
> Key: HADOOP-16162
> URL: https://issues.apache.org/jira/browse/HADOOP-16162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.3-alpha
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Major
>  Labels: CI, pull-request-available
> Fix For: 3.3.0
>
> Attachments: HADOOP-16162.1.patch
>
>
> The Job Summary Appender (JSA) was introduced in 
> [MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to 
> provide summary information about a job's runtime. This appender is 
> referenced only by the logger defined in 
> org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
> removed in 
> [MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] 
> together with other M/R1 files, so the appender is no longer used and I 
> think we can remove it.






[jira] [Resolved] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16162.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0

Committed to trunk. Thanks [~coder_chenzhi]!

> Remove unused Job Summary Appender configurations from log4j.properties
> ---
>
> Key: HADOOP-16162
> URL: https://issues.apache.org/jira/browse/HADOOP-16162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.3-alpha
>Reporter: Chen Zhi
>Priority: Major
>  Labels: CI, pull-request-available
> Fix For: 3.3.0
>
> Attachments: HADOOP-16162.1.patch
>
>
> The Job Summary Appender (JSA) was introduced in 
> [MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to 
> provide summary information about a job's runtime. This appender is 
> referenced only by the logger defined in 
> org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
> removed in 
> [MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] 
> together with other M/R1 files, so the appender is no longer used and I 
> think we can remove it.






[GitHub] [hadoop] asfgit closed pull request #551: HADOOP-16162 Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread GitBox
asfgit closed pull request #551: HADOOP-16162 Remove unused Job Summary 
Appender configurations from log4j.properties
URL: https://github.com/apache/hadoop/pull/551
 
 
   





[GitHub] [hadoop] aajisaka commented on issue #551: HADOOP-16162 Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread GitBox
aajisaka commented on issue #551: HADOOP-16162 Remove unused Job Summary 
Appender configurations from log4j.properties
URL: https://github.com/apache/hadoop/pull/551#issuecomment-469535469
 
 
   Committed. Thank you, @coder-chenzhi!





[GitHub] [hadoop] aajisaka commented on issue #551: HADOOP-16162 Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread GitBox
aajisaka commented on issue #551: HADOOP-16162 Remove unused Job Summary 
Appender configurations from log4j.properties
URL: https://github.com/apache/hadoop/pull/551#issuecomment-469534450
 
 
   LGTM, +1





[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784022#comment-16784022
 ] 

Hadoop QA commented on HADOOP-16156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 3 unchanged - 16 fixed = 3 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 37s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 37s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestNetworkTopologyWithNodeGroup |
|   | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16156 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12961087/HADOOP-16156.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5980294c572d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9fcd89a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/16014/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16014/testReport/ |
| Max. process+thread 

[GitHub] [hadoop] bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-03-04 Thread GitBox
bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints 
for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-469516436
 
 
   +1 LGTM.
   I don't think the test failures are related to this patch.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262329390
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml
 ##
 @@ -26,15 +26,15 @@ services:
   command: ["/opt/hadoop/bin/ozone","datanode"]
   env_file:
 - ./docker-config
-   ozoneManager:
+   om:
   image: apache/hadoop-runner
   privileged: true #required by the profiler
   volumes:
  - ../..:/opt/hadoop
   ports:
  - 9874:9874
   environment:
- ENSURE_OM_INITIALIZED: /data/metadata/ozoneManager/current/VERSION
+ ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
 
 Review comment:
   Here we have changed this to om; will it be the same as the service name 
when the path is created?





[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469513593
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 88 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 83 | trunk passed |
   | +1 | shadedclient | 827 | branch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 144 | trunk passed |
   | +1 | javadoc | 97 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 72 | the patch passed |
   | +1 | javac | 72 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 77 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 829 | patch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 149 | the patch passed |
   | +1 | javadoc | 67 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 86 | common in the patch failed. |
   | +1 | unit | 68 | container-service in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3922 | |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5c5da289cb7d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9fcd89a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/artifact/out/patch-unit-hadoop-hdds_common.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/testReport/ |
   | Max. process+thread count | 313 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: hadoop-hdds |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469513188
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 980 | trunk passed |
   | +1 | compile | 72 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 716 | branch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 115 | trunk passed |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 67 | the patch passed |
   | +1 | javac | 67 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-hdds: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 61 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing our client artifacts. |
   | +1 | findbugs | 128 | the patch passed |
   | +1 | javadoc | 62 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 62 | common in the patch failed. |
   | -1 | unit | 71 | container-service in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3464 | |


   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdds.security.x509.certificates.TestCertificateSignRequest |
   |   | hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   |   | hadoop.ozone.container.common.TestDatanodeStateMachine |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a6fe03809c56 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9fcd89a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/diff-checkstyle-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/patch-unit-hadoop-hdds_common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/artifact/out/patch-unit-hadoop-hdds_container-service.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: hadoop-hdds |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262327659
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,44 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}   http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+Executesudo yum install -y awscli
+Executesudo yum install -y krb5-user
+Install aws cli s3 debian
+Executesudo apt-get install -y awscli
+Executesudo apt-get install -y krb5-user
+
+Install aws cli
+${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
+Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
+
+Setup credentials
+${hostname}=Executehostname
+Execute kinit -k testuser/${hostname}@EXAMPLE.COM -t 
/etc/security/keytabs/testuser.keytab
+${result} = Executeozone sh s3 getsecret
+${accessKey} =  Get Regexp Matches ${result} 
(?<=awsAccessKey=).*
+${secret} = Get Regexp Matches${result} 
(?<=awsSecret=).*
 
 Review comment:
   Can we remove this change from here? It is outside the scope of this Jira.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #553: HDDS-1216. Change 
name of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262327729
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +140,16 @@ Run ozoneFS tests
 Execute   ls -l GET.txt
 ${rc}  ${result} =  Run And Return Rc And Outputozone fs -ls 
o3fs://abcde.pqrs/
 Should Be Equal As Integers ${rc}1
-Should contain${result} VOLUME_NOT_FOUND
+Should contain${result} not found
+
+
+Secure S3 test Failure
+Run Keyword Install aws cli
+${rc}  ${result} =  Run And Return Rc And Output  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+Should Be True ${rc} > 0
+
+Secure S3 test Success
+Run Keyword Setup credentials
+${output} = Execute  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+
 
 Review comment:
   Can we remove the S3-related changes from this?





[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: HADOOP-16156.002.patch

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: (was: HADOOP-16156.002.patch)

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch
>
>







[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: HADOOP-16156.002.patch

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: (was: HADOOP-16156.002.patch)

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Summary: [Clean-up] Remove NULL check before instanceof and fix checkstyle 
in InnerNodeImpl  (was: [Clean-up] Remove NULL check before instanceof in 
InnerNodeImpl)

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: HADOOP-16156.002.patch

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-04 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783963#comment-16783963
 ] 

Shweta commented on HADOOP-16156:
-

Patch v002 has the checkstyle fixes.
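
For context, the clean-up relies on {{instanceof}} already evaluating to false
for null, which makes a preceding null check redundant. A minimal illustration
(generic names, not the InnerNodeImpl code):

{code:java}
public class InstanceofSketch {
  public static void main(String[] args) {
    Object node = null;
    // The explicit null check adds nothing:
    System.out.println(node != null && node instanceof String); // false
    // instanceof alone already handles null:
    System.out.println(node instanceof String);                 // false
  }
}
{code}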

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>







[jira] [Commented] (HADOOP-16167) "hadoop CLASSFILE" prints error messages on Ubuntu 18

2019-03-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783958#comment-16783958
 ] 

Eric Yang commented on HADOOP-16167:


The Hadoop shell scripts use an indirection technique to evaluate and flatten 
strings for string manipulation. This technique has been discouraged since the 
discovery of the Shellshock vulnerability, where a trailing string can trigger 
unintended execution. In Hadoop's case the evaluation is intended, but the 
technique is no longer recommended because of its non-deterministic outcome. 
Most of the issues can be corrected by double-quoting to prevent globbing and 
word splitting. Briefly scanning through hadoop-functions.sh, there are a 
dozen functions that use indirection instead of double quotes to flatten 
strings. The majority of them need to be changed to double quotes and/or 
curly braces to avoid instability.

> "hadoop CLASSFILE" prints error messages on Ubuntu 18
> -
>
> Key: HADOOP-16167
> URL: https://issues.apache.org/jira/browse/HADOOP-16167
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Priority: Major
>
> {noformat}
> # hadoop org.apache.hadoop.conf.Configuration
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2366: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2331: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
> /usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2426: 
> HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_OPTS: bad substitution
> {noformat}
> The issue is a regression in bash 4.4.  See 
> [here|http://savannah.gnu.org/support/?109649].  The extraneous output can 
> break scripts that read the command output.
> According to [~aw]:
> {quote}Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
> hdfs, etc) just needs some special handling when a custom method is being 
> called.  For example, there’s no point in checking to see if it should run 
> with privileges, so just skip over that.  Probably a few other places too.  
> Relatively easy fix.  2 lines of code, maybe.{quote}






[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-03-04 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783956#comment-16783956
 ] 

wujinhu commented on HADOOP-15038:
--

OK, thanks [~ste...@apache.org]. Please let me know when you think it's time 
to share across modules; I will then continue this work.

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems or object stores could implement their 
> own metastore for optimization (addressing known issues like consistency and 
> metadata operation performance). [~ste...@apache.org] and others have done a 
> lot of great foundational work in {{S3Guard}}, which makes this a very 
> helpful starting point. I did some perf tests in HADOOP-14098 and started 
> related work for Aliyun OSS. Indeed, there is still work to do for 
> {{S3Guard}}, such as the metadata cache becoming inconsistent with S3; that 
> will also be a problem for other object stores. However, we can do this work 
> in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.
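
A simplified sketch of the kind of store-agnostic abstraction being proposed
(hypothetical names and signatures, not the exact S3Guard interface):

{code:java}
// Each filesystem or object store could plug in its own implementation
// (DynamoDB for S3, a table service for Aliyun OSS, etc.).
public interface MetadataStoreSketch {
  /** Record metadata for a path after a successful operation. */
  void put(String path, long length, boolean isDirectory);

  /** Look up previously recorded metadata, or null if unknown. */
  PathEntry get(String path);

  /** Forget a path, e.g. after a delete. */
  void delete(String path);

  final class PathEntry {
    public final String path;
    public final long length;
    public final boolean isDirectory;

    public PathEntry(String path, long length, boolean isDirectory) {
      this.path = path;
      this.length = length;
      this.isDirectory = isDirectory;
    }
  }
}
{code}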






[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469495030
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/547 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses certificate issued by SCM.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #547: HDDS-594. SCM CA: DN sends CSR and uses 
certificate issued by SCM.
URL: https://github.com/apache/hadoop/pull/547#issuecomment-469494762
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/547 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/547 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-547/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16166) TestRawLocalFileSystemContract fails with build Docker container running on Mac

2019-03-04 Thread Matt Foley (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783920#comment-16783920
 ] 

Matt Foley commented on HADOOP-16166:
-

Yes, which is exactly what testFilesystemIsCaseSensitive() tests :D That would 
get a little tautological...
Probably best to stick with the current model of identifying which systems we 
expect to be case-(in)sensitive.
I'm putting together a patch using `df .`, which returns "osxfs" in the first 
field for the case of interest.
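
A rough Java sketch of that detection idea (assumed parsing of `df .` output;
not the actual patch):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class OsxfsCheckSketch {
  // Treat the local FS as case-insensitive when the first field of the
  // "df ." mount line is "osxfs" (the Docker-on-Mac shared filesystem).
  static boolean looksLikeOsxfs() throws Exception {
    Process p = new ProcessBuilder("df", ".").start();
    try (BufferedReader r =
        new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      r.readLine();                 // skip the header row
      String line = r.readLine();   // mount info for the current directory
      return line != null && line.trim().startsWith("osxfs");
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println("case-insensitive host FS? " + looksLikeOsxfs());
  }
}
{code}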

> TestRawLocalFileSystemContract fails with build Docker container running on 
> Mac
> ---
>
> Key: HADOOP-16166
> URL: https://issues.apache.org/jira/browse/HADOOP-16166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Matt Foley
>Priority: Minor
>
> The Mac has a case-insensitive filesystem.  When using the recommended build 
> Docker container via `start-build-env.sh`, the container attaches to the Mac 
> FS to share the local git repository for Hadoop, which is very nice and 
> convenient.
> This means the TestRawLocalFileSystemContract#testFilesystemIsCaseSensitive() 
> test case (which is inherited from FileSystemContractBaseTest) should be 
> skipped.  It is not skipped, and therefore produces a unit test failure, 
> because the overridden 
> TestRawLocalFileSystemContract#filesystemIsCaseSensitive() does not take into 
> account the possibility of a Linux OS mounting a MacOS filesystem.
> The fix would extend 
> TestRawLocalFileSystemContract#filesystemIsCaseSensitive() to recognize this 
> case.






[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783914#comment-16783914
 ] 

Hadoop QA commented on HADOOP-16140:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 95 unchanged - 0 fixed = 96 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 56s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16140 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12961061/HADOOP-14200.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d8fa1baf9424 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb0fa0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16013/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16013/testReport/ |
| Max. process+thread count | 1382 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16013/console |
| Powered by 

[GitHub] [hadoop] bharatviswa504 commented on issue #548: Revert "HDDS-1072. Implement RetryProxy and FailoverProxy for OM clie…

2019-03-04 Thread GitBox
bharatviswa504 commented on issue #548: Revert "HDDS-1072. Implement RetryProxy 
and FailoverProxy for OM clie…
URL: https://github.com/apache/hadoop/pull/548#issuecomment-469475348
 
 
   This has already been done in trunk.
   
   commit b18c1c22ea238c4b783031402496164f0351b531
   Author: Hanisha Koneru 
   Date:   Fri Mar 1 20:05:12 2019 -0800
   
   Revert "HDDS-1072. Implement RetryProxy and FailoverProxy for OM client.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 closed pull request #548: Revert "HDDS-1072. Implement RetryProxy and FailoverProxy for OM clie…

2019-03-04 Thread GitBox
bharatviswa504 closed pull request #548: Revert "HDDS-1072. Implement 
RetryProxy and FailoverProxy for OM clie…
URL: https://github.com/apache/hadoop/pull/548
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783888#comment-16783888
 ] 

Hudson commented on HADOOP-16148:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16121 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16121/])
HADOOP-16148. Cleanup LineReader Unit Test. (stevel: rev 
9fcd89ab9345174e41d3684b94fc5f9d03cb4377)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestLineReader.java


> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-03-04 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783884#comment-16783884
 ] 

Steve Loughran commented on HADOOP-15625:
-

* s3a doesn't work with Snowball at all. I do know of some interest in getting 
distcp to it working, with the result then being fedexed to the final S3 store. 
The support needed will be minimal.

A key thing: we have to plan for some endpoints not having etags. And if 
someone went from unversioned to versioned, there may be unversioned files in 
the store.

* S3Select. Good point. For the s3guard integration, maybe the strategy will 
be: do a HEAD until either the etag matches or a stability threshold has been 
reached, but not worry about changes during a read. It would be interesting to 
experiment to see what happens...you could probably infer something from that 
behaviour (e.g. some cursor on the read for the paged lists, where every time 
you ask for a new page it scans some more).

Not had a chance to look @ your patch yet, will do it tomorrow.


> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't noticed it has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, and even perhaps go from new 
> data to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
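
As a rough sketch of the check described above, store-agnostic and not the 
actual S3A input stream code (the {{ObjectStore}} and {{VersionedReader}} names 
are hypothetical): cache the etag from the first request, verify it on later 
GETs, tolerate endpoints that return no etag at all, and raise an IOE on a 
mismatch.

{code:java}
import java.io.IOException;

// Hypothetical store abstraction; a real implementation would wrap the S3
// client. Response carries the payload plus the etag the server reported.
interface ObjectStore {
  Response get(String key, long offset) throws IOException;

  final class Response {
    final byte[] data;
    final String etag; // may be null: some endpoints have no etags
    Response(byte[] data, String etag) { this.data = data; this.etag = etag; }
  }
}

class VersionedReader {
  private final ObjectStore store;
  private final String key;
  private String knownEtag; // cached from the first HEAD/GET

  VersionedReader(ObjectStore store, String key) {
    this.store = store;
    this.key = key;
  }

  byte[] read(long offset) throws IOException {
    ObjectStore.Response r = store.get(key, offset);
    if (knownEtag == null) {
      knownEtag = r.etag; // first request: remember what we saw
    } else if (r.etag != null && !knownEtag.equals(r.etag)) {
      // The remote file changed mid-read: fail loudly rather than silently
      // mixing old and new bytes across seek-triggered GETs.
      throw new IOException("etag of " + key + " changed during read: was "
          + knownEtag + ", now " + r.etag);
    }
    return r.data;
  }
}
{code}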



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16167) "hadoop CLASSFILE" prints error messages on Ubuntu 18

2019-03-04 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-16167:
-

 Summary: "hadoop CLASSFILE" prints error messages on Ubuntu 18
 Key: HADOOP-16167
 URL: https://issues.apache.org/jira/browse/HADOOP-16167
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.2.0
Reporter: Daniel Templeton


{noformat}
# hadoop org.apache.hadoop.conf.Configuration
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2366: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2331: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_USER: bad substitution
/usr/lib/hadoop/bin/../lib/hadoop/libexec//hadoop-functions.sh: line 2426: HADOOP_ORG.APACHE.HADOOP.CONF.CONFIGURATION_OPTS: bad substitution
{noformat}

The issue is a regression in bash 4.4.  See 
[here|http://savannah.gnu.org/support/?109649].  The extraneous output can 
break scripts that read the command output.

According to [~aw]:

{quote}Oh, I think I see the bug.  HADOOP_SUBCMD (and equivalents in yarn, 
hdfs, etc) just needs some special handling when a custom method is being 
called.  For example, there’s no point in checking to see if it should run with 
privileges, so just skip over that.  Probably a few other places too.  
Relatively easy fix.  2 lines of code, maybe.{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-469464994
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1213 | trunk passed |
   | +1 | compile | 1545 | trunk passed |
   | +1 | checkstyle | 246 | trunk passed |
   | +1 | mvnsite | 161 | trunk passed |
   | +1 | shadedclient | 1271 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 188 | trunk passed |
   | +1 | javadoc | 119 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-aws in the patch failed. |
   | -1 | compile | 1315 | root in the patch failed. |
   | -1 | javac | 1315 | root in the patch failed. |
   | -0 | checkstyle | 236 | root: The patch generated 3 new + 10 unchanged - 0 
fixed = 13 total (was 10) |
   | -1 | mvnsite | 46 | hadoop-aws in the patch failed. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 835 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 42 | hadoop-aws in the patch failed. |
   | +1 | javadoc | 109 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 569 | hadoop-common in the patch failed. |
   | -1 | unit | 47 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 8285 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestRPC |
   |   | hadoop.ipc.TestCallQueueManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/539 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 39a4f52a9a2d 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/testReport/ |
   | Max. process+thread count | 793 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #537: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread GitBox
avijayanhwx commented on issue #537: HDDS-1136 : Add metric counters to capture 
the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537#issuecomment-469463610
 
 
   Patch committed to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx closed pull request #537: HDDS-1136 : Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread GitBox
avijayanhwx closed pull request #537: HDDS-1136 : Add metric counters to 
capture the RocksDB checkpointing statistics.
URL: https://github.com/apache/hadoop/pull/537
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16148) Cleanup LineReader Unit Test

2019-03-04 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16148:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

+1
committed. 

thanks! It's these little code cleanups which benefit people later - even if 
it's not immediately obvious. The next person who tries to debug record readers 
will be grateful.

> Cleanup LineReader Unit Test
> 
>
> Key: HADOOP-16148
> URL: https://issues.apache.org/jira/browse/HADOOP-16148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HADOOP-16148.1.patch, HADOOP-16148.2.patch
>
>
> I was trying to track down a bug and thought it might be coming from the 
> {{LineReader}} class.  It wasn't.  However, I did clean up the unit test for 
> this class a bit.  I figured I might as well at least post the diff file here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16166) TestRawLocalFileSystemContract fails with build Docker container running on Mac

2019-03-04 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783863#comment-16783863
 ] 

Steve Loughran commented on HADOOP-16166:
-

hey, we could make it clever: create a new temp file with a unique name, then 
stat the same name in upper case.
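
A minimal sketch of that probe, in plain java.io and with illustrative names 
(not the actual FileSystemContractBaseTest code):

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Locale;
import java.util.UUID;

public class CaseSensitivityProbe {
  /**
   * Create a uniquely named temp file, then stat the same name in upper
   * case. On a case-insensitive mount (e.g. a macOS directory shared into
   * a Linux build container) the upper-cased name resolves to the same
   * file, so this returns false.
   */
  public static boolean isCaseSensitive(File dir) throws IOException {
    String name = "probe-" + UUID.randomUUID().toString().toLowerCase(Locale.ROOT);
    File lower = new File(dir, name);
    File upper = new File(dir, name.toUpperCase(Locale.ROOT));
    try {
      if (!lower.createNewFile()) {
        throw new IOException("probe file already exists: " + lower);
      }
      // If the upper-cased name is visible, the filesystem ignored case.
      return !upper.exists();
    } finally {
      lower.delete();
    }
  }
}
{code}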

> TestRawLocalFileSystemContract fails with build Docker container running on 
> Mac
> ---
>
> Key: HADOOP-16166
> URL: https://issues.apache.org/jira/browse/HADOOP-16166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Matt Foley
>Priority: Minor
>
> The Mac has a case-insensitive filesystem.  When using the recommended build 
> Docker container via `start-build-env.sh`, the container attaches to the Mac 
> FS to share the local git repository for Hadoop.  Which is very nice and 
> convenient.
> This means the TestRawLocalFileSystemContract#testFilesystemIsCaseSensitive() 
> test case (which is inherited from FileSystemContractBaseTest) should be 
> skipped.  It fails to be skipped, and therefore throws a Unit Test failure, 
> because @Override TestRawLocalFileSystemContract#filesystemIsCaseSensitive() 
> does not take into account the possibility of a Linux OS mounting a MacOS 
> filesystem.
> The fix would extend 
> TestRawLocalFileSystemContract#filesystemIsCaseSensitive() to recognize this 
> case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16165.
-
Resolution: Invalid

this isn't the way to ask questions. Get on the common-dev list & make queries 
there. 

Or even better: run some experiments

> S3A connector - are multiple SSE-KMS keys supported within same bucket?
> ---
>
> Key: HADOOP-16165
> URL: https://issues.apache.org/jira/browse/HADOOP-16165
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: tools
>Reporter: t oo
>Priority: Major
>
> Within a single s3 bucket I have 2 objects:
> s3a://bucketabc/a/b/c/object1
> s3a://bucketabc/a/b/c/object2
> object1 is encrypted with sse-kms (kms key1)
> object2 is encrypted with sse-kms (kms key2)
> The 2 objects are not encrypted with a common kms key! But they are in the 
> same s3 bucket.
>  
> [~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys 
> so that it can read the data (i.e. we want to use hive/spark to read from s3) 
> from diff objects within the same bucket when those objects were encrypted 
> with diff keys?
> [https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]
>  
> <property>
>   <name>fs.s3a.server-side-encryption.key</name>
>   <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01,
>   arn:aws:kms:us-west-2:360379543683:key/vjsnhdjksd</value>
> </property>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469454263
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1073 | trunk passed |
   | +1 | compile | 43 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 641 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 42 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 132 | server-scm in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3049 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/534 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a39858692bb7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/testReport/ |
   | Max. process+thread count | 560 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv merged pull request #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-03-04 Thread GitBox
ajayydv merged pull request #523: HDDS-623. On SCM UI, Node Manager info is 
empty
URL: https://github.com/apache/hadoop/pull/523
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-03-04 Thread GitBox
ajayydv commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-469449649
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469446813
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 964 | trunk passed |
   | +1 | compile | 46 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 703 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 40 | trunk passed |
   | +1 | javadoc | 22 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 713 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 129 | server-scm in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2974 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.chillmode.TestSCMChillModeManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/534 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 29f0611fd368 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-534/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-04 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783835#comment-16783835
 ] 

Stephen O'Donnell commented on HADOOP-16140:


I have uploaded the v5 patch to add the code block around the new test back in. 
This will raise a checkstyle warning, but I think it's OK as it keeps with the 
convention already in that section of the code.

Assuming the latest patch runs OK, I think this one is ready for wider review 
and hopefully we can commit it if everyone is happy.

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HADOOP-14200.005.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
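
As an aside, the two-step workaround above can also be driven through the 
public {{org.apache.hadoop.fs.Trash}} API; a rough sketch under that 
assumption, mirroring the CLI recipe rather than implementing the proposed 
-emptyTrash command:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;

public class EmptyTrashNow {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Pretend checkpoints expire after one minute: the programmatic
    // equivalent of "hadoop fs -Dfs.trash.interval=1 -expunge".
    conf.setLong("fs.trash.interval", 1);
    FileSystem fs = FileSystem.get(conf);
    Trash trash = new Trash(fs, conf);
    trash.checkpoint();   // roll the Current directory into a checkpoint
    Thread.sleep(61_000); // wait out the one-minute interval
    trash.expunge();      // delete checkpoints older than the interval
  }
}
{code}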



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #530: HADOOP-16058 S3A tests to include Terasort

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #530: HADOOP-16058 S3A tests to include Terasort
URL: https://github.com/apache/hadoop/pull/530#issuecomment-469444867
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/530 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/530 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-530/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #531: HADOOP-15961 Add PathCapabilities to FS and FC to complement StreamCapabilities

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #531: HADOOP-15961 Add PathCapabilities to FS 
and FC to complement StreamCapabilities
URL: https://github.com/apache/hadoop/pull/531#issuecomment-469444529
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/531 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/531 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-531/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-03-04 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HADOOP-16140:
---
Attachment: HADOOP-14200.005.patch

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HADOOP-14200.002.patch, HADOOP-14200.003.patch, 
> HADOOP-14200.004.patch, HADOOP-14200.005.patch, HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current users trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add a extra command, "hdfs dfs -emptyTrash" 
> that purges everything in the logged in users Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
bharatviswa504 commented on issue #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#issuecomment-469438577
 
 
   Thank you @ajayydv for the offline discussion.
   Updated the code.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name 
of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260773
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -78,7 +78,7 @@ execute_tests(){
   TITLE="Ozone $TEST tests with $COMPOSE_DIR cluster"
   set +e
   OUTPUT_NAME="$COMPOSE_DIR-${TEST//\//_}"
 - docker-compose -f "$COMPOSE_FILE" exec -T ozoneManager python -m robot --log NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output "smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" --reporttitle "$TITLE" "smoketest/$TEST"
 + docker-compose -f "$COMPOSE_FILE" exec -T om python -m robot --log NONE --report NONE "${OZONE_ROBOT_OPTS[@]}" --output "smoketest/$RESULT_DIR/robot-$OUTPUT_NAME.xml" --logtitle "$TITLE" --reporttitle "$TITLE" "smoketest/$TEST"
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name 
of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260766
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -16,14 +16,44 @@
 *** Settings ***
 Documentation   Smoke test to start cluster with docker-compose 
environments.
 Library OperatingSystem
+Library String
 Resource../commonlib.robot
 
+*** Variables ***
+${ENDPOINT_URL}    http://s3g:9878
+
+*** Keywords ***
+Install aws cli s3 centos
+    Execute    sudo yum install -y awscli
+    Execute    sudo yum install -y krb5-user
+Install aws cli s3 debian
+    Execute    sudo apt-get install -y awscli
+    Execute    sudo apt-get install -y krb5-user
+
+Install aws cli
+    ${rc}  ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}  ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Setup credentials
+    ${hostname} =    Execute    hostname
+    Execute    kinit -k testuser/${hostname}@EXAMPLE.COM -t /etc/security/keytabs/testuser.keytab
+    ${result} =    Execute    ozone sh s3 getsecret
+    ${accessKey} =    Get Regexp Matches    ${result}    (?<=awsAccessKey=).*
+    ${secret} =    Get Regexp Matches    ${result}    (?<=awsSecret=).*
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #553: HDDS-1216. Change name of ozoneManager 
service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#issuecomment-469437517
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 20 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1091 | trunk passed |
   | -1 | compile | 24 | dist in trunk failed. |
   | -1 | mvnsite | 26 | dist in trunk failed. |
   | +1 | shadedclient | 706 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | -1 | compile | 20 | dist in the patch failed. |
   | -1 | javac | 20 | dist in the patch failed. |
   | -1 | mvnsite | 21 | dist in the patch failed. |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | shelldocs | 15 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch 3  line(s) with tabs. |
   | +1 | shadedclient | 838 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 22 | dist in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3001 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/553 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  shellcheck  shelldocs  |
   | uname | Linux 1ea77581ba55 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cb0fa0c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/branch-compile-hadoop-ozone_dist.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/branch-mvnsite-hadoop-ozone_dist.txt
 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-compile-hadoop-ozone_dist.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-compile-hadoop-ozone_dist.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-mvnsite-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/artifact/out/patch-unit-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-553/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #553: HDDS-1216. Change name 
of ozoneManager service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553#discussion_r262260757
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
 ##
 @@ -107,5 +140,16 @@ Run ozoneFS tests
     Execute    ls -l GET.txt
     ${rc}  ${result} =    Run And Return Rc And Output    ozone fs -ls o3fs://abcde.pqrs/
     Should Be Equal As Integers    ${rc}    1
-    Should contain    ${result}    VOLUME_NOT_FOUND
+    Should contain    ${result}    not found
+
+
+Secure S3 test Failure
+    Run Keyword    Install aws cli
+    ${rc}  ${result} =    Run And Return Rc And Output    aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123
+    Should Be True    ${rc} > 0
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15720) rpcTimeout may not have been applied correctly

2019-03-04 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783807#comment-16783807
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-15720:
--

Hi [~yzhangal], if the bug is that no rpc timeout gets applied, it should be 
very easy to reproduce. Why don't we illustrate it in a unit test?
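
For instance, a minimal, framework-agnostic sketch of such a test (plain 
sockets standing in for the IPC Client, arbitrary timeout values, and it 
assumes JUnit 4.13+ for assertThrows): a server that accepts a connection but 
never replies should make a read with the timeout applied fail fast.

{code:java}
import static org.junit.Assert.assertThrows;

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import org.junit.Test;

public class TestTimeoutSketch {
  @Test
  public void timeoutFiresWhenServerNeverResponds() throws Exception {
    try (ServerSocket server = new ServerSocket(0)) {
      // A server that accepts the connection but never writes a response,
      // simulating a hung RPC server.
      Thread silent = new Thread(() -> {
        try (Socket ignored = server.accept()) {
          Thread.sleep(10_000); // hold the connection open, send nothing
        } catch (Exception e) {
          // socket closed during teardown
        }
      });
      silent.setDaemon(true);
      silent.start();

      try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
        client.setSoTimeout(1_000); // plays the role of rpcTimeout
        InputStream in = client.getInputStream();
        // With the timeout applied, the read must fail within ~1s
        // instead of blocking until the server finally answers.
        assertThrows(SocketTimeoutException.class, () -> in.read());
      }
    }
  }
}
{code}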

> rpcTimeout may not have been applied correctly
> --
>
> Key: HADOOP-15720
> URL: https://issues.apache.org/jira/browse/HADOOP-15720
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Yongjun Zhang
>Priority: Major
>
> org.apache.hadoop.ipc.Client send multiple RPC calls to server synchronously 
> via the same connection as in the following synchronized code block:
> {code:java}
>   synchronized (sendRpcRequestLock) {
> Future senderFuture = sendParamsExecutor.submit(new Runnable() {
>   @Override
>   public void run() {
> try {
>   synchronized (Connection.this.out) {
> if (shouldCloseConnection.get()) {
>   return;
> }
> 
> if (LOG.isDebugEnabled()) {
>   LOG.debug(getName() + " sending #" + call.id
>   + " " + call.rpcRequest);
> }
>  
> byte[] data = d.getData();
> int totalLength = d.getLength();
> out.writeInt(totalLength); // Total Length
> out.write(data, 0, totalLength);// RpcRequestHeader + 
> RpcRequest
> out.flush();
>   }
> } catch (IOException e) {
>   // exception at this point would leave the connection in an
>   // unrecoverable state (eg half a call left on the wire).
>   // So, close the connection, killing any outstanding calls
>   markClosed(e);
> } finally {
>   //the buffer is just an in-memory buffer, but it is still 
> polite to
>   // close early
>   IOUtils.closeStream(d);
> }
>   }
> });
>   
> try {
>   senderFuture.get();
> } catch (ExecutionException e) {
>   Throwable cause = e.getCause();
>   
>   // cause should only be a RuntimeException as the Runnable above
>   // catches IOException
>   if (cause instanceof RuntimeException) {
> throw (RuntimeException) cause;
>   } else {
> throw new RuntimeException("unexpected checked exception", cause);
>   }
> }
>   }
> {code}
> And it then waits for result asynchronously via
> {code:java}
> /* Receive a response.
>  * Because only one receiver, so no synchronization on in.
>  */
> private void receiveRpcResponse() {
>   if (shouldCloseConnection.get()) {
> return;
>   }
>   touch();
>   
>   try {
> int totalLen = in.readInt();
> RpcResponseHeaderProto header = 
> RpcResponseHeaderProto.parseDelimitedFrom(in);
> checkResponse(header);
> int headerLen = header.getSerializedSize();
> headerLen += CodedOutputStream.computeRawVarint32Size(headerLen);
> int callId = header.getCallId();
> if (LOG.isDebugEnabled())
>   LOG.debug(getName() + " got value #" + callId);
> Call call = calls.get(callId);
> RpcStatusProto status = header.getStatus();
> ..
> {code}
> However, we can see that the {{call}} objects returned by 
> {{receiveRpcResponse()}} above may arrive in any order.
> The following code
> {code:java}
> int totalLen = in.readInt();
> {code}
> eventually calls one of the following two methods, where rpcTimeOut is 
> checked against:
> {code:java}
>   /** Read a byte from the stream.
>* Send a ping if timeout on read. Retries if no failure is detected
>* until a byte is read.
>* @throws IOException for any IO problem other than socket timeout
>*/
>   @Override
>   public int read() throws IOException {
> int waiting = 0;
> do {
>   try {
> return super.read();
>   } catch (SocketTimeoutException e) {
> waiting += soTimeout;
> handleTimeout(e, waiting);
>   }
> } while (true);
>   }
>   /** Read bytes into a buffer starting from offset off
>* Send a ping if timeout on read. Retries if no failure is detected
>* until a byte is read.
>* 
>* @return the total number of bytes read; -1 if the connection is 
> closed.
>*/
>   @Override
>   public int read(byte[] buf, int off, int len) throws IOException {
> int waiting = 0;
> do {
>   try 

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262253124
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/DataNodeChillModeRule.java
 ##
 @@ -62,18 +65,37 @@ public double getRegisteredDataNodes() {
 
   @Override
   public void process(NodeRegistrationContainerReport reportsProto) {
-if (requiredDns == 0) {
-  // No dn check required.
+
+registeredDnSet.add(reportsProto.getDatanodeDetails().getUuid());
+registeredDns = registeredDnSet.size();
+
+  }
+
+  @Override
+  public void onMessage(NodeRegistrationContainerReport
+  nodeRegistrationContainerReport, EventPublisher publisher) {
+// TODO: when we have remove handlers, we can remove getInChillmode check
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262253079
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java
 ##
 @@ -96,12 +92,36 @@ public void process(NodeRegistrationContainerReport 
reportsProto) {
 }
   }
 });
+  }
+
+  @Override
+  public void onMessage(NodeRegistrationContainerReport
+  nodeRegistrationContainerReport, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
 
 Review comment:
   Updated the code with slight change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262250483
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java
 ##
 @@ -96,12 +92,36 @@ public void process(NodeRegistrationContainerReport 
reportsProto) {
 }
   }
 });
+  }
+
+  @Override
+  public void onMessage(NodeRegistrationContainerReport
+  nodeRegistrationContainerReport, EventPublisher publisher) {
+
+// TODO: when we have remove handlers, we can remove getInChillmode check
+if (chillModeManager.getInChillMode()) {
+  if (validate()) {
 
 Review comment:
   This check is added because, if this rule is already satisfied, we don't 
need to process the report again; that is the reason for the first check.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor ContainerChillModeRule and DatanodeChillMode rule.

2019-03-04 Thread GitBox
bharatviswa504 commented on a change in pull request #534: HDDS-1193. Refactor 
ContainerChillModeRule and DatanodeChillMode rule.
URL: https://github.com/apache/hadoop/pull/534#discussion_r262249732
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java
 ##
 @@ -84,10 +84,6 @@ public double getCurrentContainerThreshold() {
 
   @Override
   public void process(NodeRegistrationContainerReport reportsProto) {
-if (maxContainer == 0) {
 
 Review comment:
   In validate(), we call getCurrentContainerThreshold(),
   
   ```
   public double getCurrentContainerThreshold() {
     if (maxContainer == 0) {
       return 1;
     }
     return (containerWithMinReplicas.doubleValue() / maxContainer);
   }

   public boolean validate() {
     return getCurrentContainerThreshold() >= chillModeCutoff;
   }
   ```
   
   So, if maxContainer == 0, the validate() call gets 1 from 
getCurrentContainerThreshold(), and the >= check passes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #485: HDFS-14244. Refactor the libhdfspp cmake build files.

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #485: HDFS-14244. Refactor the libhdfspp cmake 
build files.
URL: https://github.com/apache/hadoop/pull/485#issuecomment-469426445
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 20 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1096 | trunk passed |
   | +1 | compile | 103 | trunk passed |
   | +1 | mvnsite | 23 | trunk passed |
   | +1 | shadedclient | 727 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 16 | the patch passed |
   | -1 | compile | 28 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | cc | 28 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | javac | 28 | hadoop-hdfs-native-client in the patch failed. |
   | +1 | mvnsite | 19 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 13 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 224 line(s) with tabs. |
   | +1 | shadedclient | 800 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 32 | hadoop-hdfs-native-client in the patch failed. |
   | -1 | asflicense | 29 | The patch generated 2 ASF License warnings. |
   | | | 3155 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/485 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  cc  shellcheck  shelldocs  |
   | uname | Linux 8df78f00f393 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 10b802b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 338 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-485/37/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv opened a new pull request #553: HDDS-1216. Change name of ozoneManager service in docker compose file…

2019-03-04 Thread GitBox
ajayydv opened a new pull request #553: HDDS-1216. Change name of ozoneManager 
service in docker compose file…
URL: https://github.com/apache/hadoop/pull/553
 
 
   …s to om. Contributed by Ajay Kumar.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #552: HDDS-1156. testDelegationToken is failing in TestSecureOzoneCluster. …

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #552: HDDS-1156. testDelegationToken is failing 
in TestSecureOzoneCluster. …
URL: https://github.com/apache/hadoop/pull/552#issuecomment-469413292
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 79 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1109 | trunk passed |
   | -1 | compile | 99 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 67 | trunk passed |
   | +1 | shadedclient | 797 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 42 | trunk passed |
   | +1 | javadoc | 37 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 64 | the patch passed |
   | -1 | compile | 91 | hadoop-ozone in the patch failed. |
   | -1 | javac | 91 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 22 | the patch passed |
   | +1 | mvnsite | 53 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 804 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 52 | the patch passed |
   | +1 | javadoc | 35 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 47 | ozone-manager in the patch passed. |
   | -1 | unit | 803 | integration-test in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4356 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/552 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 81096b77024c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 387dbe5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | v3.1.0-RC1 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/testReport/ |
   | Max. process+thread count | 4282 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-552/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16166) TestRawLocalFileSystemContract fails with build Docker container running on Mac

2019-03-04 Thread Matt Foley (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-16166:

Description: 
The Mac has a case-insensitive filesystem.  When using the recommended build 
Docker container via `start-build-env.sh`, the container attaches to the Mac FS 
to share the local git repository for Hadoop.  Which is very nice and 
convenient.

This means the TestRawLocalFileSystemContract#testFilesystemIsCaseSensitive() 
test case (which is inherited from FileSystemContractBaseTest) should be 
skipped.  It fails to be skipped, and therefore throws a Unit Test failure, 
because @Override TestRawLocalFileSystemContract#filesystemIsCaseSensitive() 
does not take into account the possibility of a Linux OS mounting a MacOS 
filesystem.

The fix would extend TestRawLocalFileSystemContract#filesystemIsCaseSensitive() 
to recognize this case.

  was:
The Mac has a case-insensitive filesystem.  When using the recommended build 
Docker container via `start-build-env.sh`, the container attaches to the Mac FS 
to share the local git repository for Hadoop.  Which is very nice and 
convenient.

This means the TestRawLocalFileSystemContract::testFilesystemIsCaseSensitive() 
test case (which is inherited from FileSystemContractBaseTest) should be 
skipped.  It fails to be skipped, and therefore throws a Unit Test failure, 
because @Override TestRawLocalFileSystemContract::filesystemIsCaseSensitive() 
does not take into account the possibility of a Linux OS mounting a MacOS 
filesystem.

The fix would extend 
TestRawLocalFileSystemContract::filesystemIsCaseSensitive() to recognize this 
case.


> TestRawLocalFileSystemContract fails with build Docker container running on 
> Mac
> ---
>
> Key: HADOOP-16166
> URL: https://issues.apache.org/jira/browse/HADOOP-16166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Matt Foley
>Priority: Minor
>
> The Mac has a case-insensitive filesystem.  When using the recommended build 
> Docker container via `start-build-env.sh`, the container attaches to the Mac 
> FS to share the local git repository for Hadoop.  Which is very nice and 
> convenient.
> This means the TestRawLocalFileSystemContract#testFilesystemIsCaseSensitive() 
> test case (which is inherited from FileSystemContractBaseTest) should be 
> skipped.  It fails to be skipped, and therefore throws a Unit Test failure, 
> because @Override TestRawLocalFileSystemContract#filesystemIsCaseSensitive() 
> does not take into account the possibility of a Linux OS mounting a MacOS 
> filesystem.
> The fix would extend 
> TestRawLocalFileSystemContract#filesystemIsCaseSensitive() to recognize this 
> case.
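
A sketch of one way the extended check could probe actual filesystem behaviour instead of the OS name (method and file names here are illustrative, not the actual patch):
{code:java}
import java.io.File;
import java.io.IOException;

/** Probe whether the directory's filesystem treats names case-sensitively. */
static boolean probeCaseSensitivity(File dir) throws IOException {
  File lower = new File(dir, "casecheck.tmp");
  File upper = new File(dir, "CASECHECK.tmp");
  if (!lower.createNewFile()) {
    throw new IOException("probe file already exists: " + lower);
  }
  try {
    // On a case-insensitive mount (e.g. a macOS volume shared into a Linux
    // build container), the upper-cased name resolves to the file we just
    // created, so exists() returns true and the filesystem is not
    // case-sensitive.
    return !upper.exists();
  } finally {
    lower.delete();
  }
}
{code}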



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16166) TestRawLocalFileSystemContract fails with build Docker container running on Mac

2019-03-04 Thread Matt Foley (JIRA)
Matt Foley created HADOOP-16166:
---

 Summary: TestRawLocalFileSystemContract fails with build Docker 
container running on Mac
 Key: HADOOP-16166
 URL: https://issues.apache.org/jira/browse/HADOOP-16166
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.0
Reporter: Matt Foley


The Mac has a case-insensitive filesystem.  When using the recommended build 
Docker container via `start-build-env.sh`, the container attaches to the Mac FS 
to share the local git repository for Hadoop.  Which is very nice and 
convenient.

This means the TestRawLocalFileSystemContract::testFilesystemIsCaseSensitive() 
test case (which is inherited from FileSystemContractBaseTest) should be 
skipped.  It fails to be skipped, and therefore throws a Unit Test failure, 
because @Override TestRawLocalFileSystemContract::filesystemIsCaseSensitive() 
does not take into account the possibility of a Linux OS mounting a MacOS 
filesystem.

The fix would extend 
TestRawLocalFileSystemContract::filesystemIsCaseSensitive() to recognize this 
case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #552: HDDS-1156. testDelegationToken is failing in TestSecureOzoneCluster. …

2019-03-04 Thread GitBox
xiaoyuyao commented on issue #552: HDDS-1156. testDelegationToken is failing in 
TestSecureOzoneCluster. …
URL: https://github.com/apache/hadoop/pull/552#issuecomment-469398481
 
 
   Thanks @ajayydv  for fixing this. Change LGTM, +1 pending Jenkins.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv opened a new pull request #552: HDDS-1156. testDelegationToken is failing in TestSecureOzoneCluster. …

2019-03-04 Thread GitBox
ajayydv opened a new pull request #552: HDDS-1156. testDelegationToken is 
failing in TestSecureOzoneCluster. …
URL: https://github.com/apache/hadoop/pull/552
 
 
   …Contributed by Ajay Kumar.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
steveloughran commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-469386030
 
 
   hadoop-common test run => failure. hadoop-aws compilation failure caused by a 
file which has been deleted.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #533: HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden

2019-03-04 Thread GitBox
steveloughran commented on issue #533: HADOOP-14630  Contract Tests to verify 
create, mkdirs and rename under a file is forbidden
URL: https://github.com/apache/hadoop/pull/533#issuecomment-469379568
 
 
   Whitespace is from an error file. I deny (direct) responsibility.
   ```
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:79:68000-68058
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:80:68058-75558
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:81:75558-75580
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:82:75580-7c000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:83:7f5d7ff54000-7f5d7ff57000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:84:7f5d7ff57000-7f5d805ff000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:85:7f5d805ff000-7f5d80602000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:86:7f5d80602000-7f5d80ca9000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:87:7f5d80ca9000-7f5d80cab000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:88:7f5d80cab000-7f5d80fff000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:89:7f5d80fff000-7f5d8100
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:90:7f5d8100-7f5d8127
 rwxp  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:91:7f5d8127-7f5d9000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:92:7f5d9000-7f5d9003b000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:93:7f5d9003b000-7f5d9400
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:94:7f5d9435-7f5d94351000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:95:7f5d94351000-7f5d94451000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:96:7f5d94451000-7f5d94452000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:97:7f5d94452000-7f5d94552000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:98:7f5d94552000-7f5d94553000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:99:7f5d94553000-7f5d9465d000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:100:7f5d9465d000-7f5d94a13000
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:109:7f5d94e28000-7f5d94e2e000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:118:7f5d95251000-7f5d95253000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:146:7f5d96341000-7f5d96345000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:151:7f5d97263000-7f5d97293000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:156:7f5d974ac000-7f5d974b
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:169:7f5d97c94000-7f5d97c98000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:176:7f5d97fbd000-7f5d97fc
 ---p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:177:7f5d97fc-7f5d980c2000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:178:7f5d980c9000-7f5d980ca000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:179:7f5d980ca000-7f5d980cb000
 r--p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:180:7f5d980cb000-7f5d980cc000
 rw-p  00:00 0 
   
./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:183:7f5d980ce000-7f5d980cf000
 rw-p  00:00 0 
   ./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:190:jvm_args: 
-Dapplication.home=/usr/lib/jvm/java-8-openjdk-amd64 -Xms8m 
   ./hadoop-hdfs-project/hadoop-hdfs-client/hs_err_pid24650.log:222:libc:glibc 
2.23 NPTL 2.23 
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Created] (HADOOP-16164) S3aDelegationTokens to add accessor for tests to get at the token binding

2019-03-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16164:
---

 Summary: S3aDelegationTokens to add accessor for tests to get at 
the token binding
 Key: HADOOP-16164
 URL: https://issues.apache.org/jira/browse/HADOOP-16164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


For testing, it turns out to be useful to get at the current token binding in 
the S3ADelegationTokens instance of a filesystem.

Provide an accessor, tagged as being for testing only.
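
A minimal sketch of such an accessor (the binding type and field name are assumptions about the S3ADelegationTokens internals, not the committed change):
{code:java}
import com.google.common.annotations.VisibleForTesting;

/**
 * Get the current token binding.
 * For testing only.
 */
@VisibleForTesting
public AbstractDelegationTokenBinding getTokenBinding() {
  return tokenBinding;
}
{code}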



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783633#comment-16783633
 ] 

Gabor Bota edited comment on HADOOP-15999 at 3/4/19 6:39 PM:
-

It was really me - I was running the tests in my IDE with the setting:
{noformat}
  <property>
    <name>fs.s3a.s3guard.test.implementation</name>
    <value>local</value>
  </property>
{noformat}
Running the same test with *dynamo* instead, everything passes. 
It turned out that the reason for the *NPE*s when using local was that we had the 
issue with the reference to the localms again: when we rebuild the fs or build a 
new fs instance, we have to set the same cache, and then the NPEs are gone.

After fixing the NPEs, the next issue is 
{{java.util.concurrent.ExecutionException: java.io.FileNotFoundException:}} - 
again only for *local*.
In {{expectExceptionWhenReadingOpenFileAPI}}, when the following is called:
{code:java}
try (FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()) {
  intercept(FileNotFoundException.class, () -> {
    byte[] bytes = new byte[text.length()];
    return in.read(bytes, 0, bytes.length);
  });
}
{code}
the *{{FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()}}* 
line throws the *FNFE*, and that's even before it's expected. That means 
something is going wrong when the open file API is used. I don't have a clue 
right now why this would happen only when using local and not when using dynamo, 
but I need to figure it out.


was (Author: gabor.bota):
It was really me - I was running the tests in my IDE with the setting:
{noformat}
  <property>
    <name>fs.s3a.s3guard.test.implementation</name>
    <value>local</value>
  </property>
{noformat}
Running the same test with *dynamo* instead, everything passes. 
It turned out that the reason for the *NPE*s when using local was that we had the 
issue with the reference to the localms again: when we rebuild the fs or build a 
new fs instance, we have to set the same cache, and then the NPEs are gone.

After fixing the NPEs, the next issue is 
{{java.util.concurrent.ExecutionException: java.io.FileNotFoundException:}} - 
again only for *local*.
In {{expectExceptionWhenReadingOpenFileAPI}}, when the following is called:
{code:java}
try (FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()) {
  intercept(FileNotFoundException.class, () -> {
    byte[] bytes = new byte[text.length()];
    return in.read(bytes, 0, bytes.length);
  });
}
{code}
the *{{FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()}}* 
line throws the *FNFE*, and that's even before it's expected. That means 
something is going wrong when the open file API is used. I don't have a clue 
right now why this would happen only when using local and not when using dynamo, 
but I need to figure it out.

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.

[jira] [Updated] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread t oo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

t oo updated HADOOP-16165:
--
Description: 
Within a single s3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (kms key1)

object2 is encrypted with sse-kms (kms key2)

The 2 objects are not encrypted with a common kms key! But they are in the same 
s3 bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, so 
that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
different objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 
 
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01,
    arn:aws:kms:us-west-2:360379543683:key/vjsnhdjksd</value>
</property>

  was:
Within a single s3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (kms key1)

object2 is encrypted with sse-kms (kms key2)

The 2 objects are not encrypted with a common kms key! But they are in the same 
s3 bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, so 
that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
different objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 
 
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
</property>


> S3A connector - are multiple SSE-KMS keys supported within same bucket?
> ---
>
> Key: HADOOP-16165
> URL: https://issues.apache.org/jira/browse/HADOOP-16165
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: tools
>Reporter: t oo
>Priority: Major
>
> Within a single s3 bucket I have 2 objects:
> s3a://bucketabc/a/b/c/object1
> s3a://bucketabc/a/b/c/object2
> object1 is encrypted with sse-kms (kms key1)
> object2 is encrypted with sse-kms (kms key2)
> The 2 objects are not encrypted with a common kms key! But they are in the 
> same s3 bucket.
>  
> [~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, 
> so that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
> different objects within the same bucket when those objects were encrypted 
> with different keys?
> [https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]
>  
>  
> <property>
>   <name>fs.s3a.server-side-encryption.key</name>
>   <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01,
>     arn:aws:kms:us-west-2:360379543683:key/vjsnhdjksd</value>
> </property>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread t oo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

t oo updated HADOOP-16165:
--
Description: 
Within a single s3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (kms key1)

object2 is encrypted with sse-kms (kms key2)

The 2 objects are not encrypted with a common kms key! But they are in the same 
s3 bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, so 
that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
different objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 
 
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
</property>

  was:
Within a single s3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (key1)

object2 is encrypted with sse-kms (key2)

The 2 objects are not encrypted with a common key! But they are in the same s3 
bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, so 
that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
different objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 

<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
</property>



> S3A connector - are multiple SSE-KMS keys supported within same bucket?
> ---
>
> Key: HADOOP-16165
> URL: https://issues.apache.org/jira/browse/HADOOP-16165
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: tools
>Reporter: t oo
>Priority: Major
>
> Within a single s3 bucket I have 2 objects:
> s3a://bucketabc/a/b/c/object1
> s3a://bucketabc/a/b/c/object2
> object1 is encrypted with sse-kms (kms key1)
> object2 is encrypted with sse-kms (kms key2)
> The 2 objects are not encrypted with a common kms key! But they are in the 
> same s3 bucket.
>  
> [~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, 
> so that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
> different objects within the same bucket when those objects were encrypted 
> with different keys?
> [https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]
>  
>  
> <property>
>   <name>fs.s3a.server-side-encryption.key</name>
>   <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
> </property>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16165) S3A connector - are multiple SSE-KMS keys supported within same bucket?

2019-03-04 Thread t oo (JIRA)
t oo created HADOOP-16165:
-

 Summary: S3A connector - are multiple SSE-KMS keys supported 
within same bucket?
 Key: HADOOP-16165
 URL: https://issues.apache.org/jira/browse/HADOOP-16165
 Project: Hadoop Common
  Issue Type: Wish
  Components: tools
Reporter: t oo


Within a single s3 bucket I have 2 objects:

s3a://bucketabc/a/b/c/object1

s3a://bucketabc/a/b/c/object2

object1 is encrypted with sse-kms (key1)

object2 is encrypted with sse-kms (key2)

The 2 objects are not encrypted with a common key! But they are in the same s3 
bucket.

 

[~ste...@apache.org] - Does the s3a connector support multiple sse-kms keys, so 
that it can read data (i.e. I want to use Hive/Spark to read from S3) from 
different objects within the same bucket when those objects were encrypted with 
different keys?

[https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/SSE-KMS-enable.html]

 

<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:360379543683:key/071a86ff-8881-4ba0-9230-95af6d01ca01</value>
</property>
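
For reference, a sketch of how to confirm which key each object actually carries (AWS SDK for Java v1, which hadoop-aws bundles; the bucket and object names are the ones from the description above):
{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class CheckSseKmsKeys {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    for (String key : new String[] {"a/b/c/object1", "a/b/c/object2"}) {
      ObjectMetadata md = s3.getObjectMetadata("bucketabc", key);
      // For SSE-KMS objects this is the KMS key ARN/ID stored with the object.
      System.out.println(key + " -> " + md.getSSEAwsKmsKeyId());
    }
  }
}
{code}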




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783633#comment-16783633
 ] 

Gabor Bota commented on HADOOP-15999:
-

It was really me - I was running the tests in my IDE with the setting:
{noformat}
  <property>
    <name>fs.s3a.s3guard.test.implementation</name>
    <value>local</value>
  </property>
{noformat}
Running the same test with *dynamo* instead, everything passes. 
It turned out that the reason for the *NPE*s when using local was that we had the 
issue with the reference to the localms again: when we rebuild the fs or build a 
new fs instance, we have to set the same cache, and then the NPEs are gone.

After fixing the NPEs, the next issue is 
{{java.util.concurrent.ExecutionException: java.io.FileNotFoundException:}} - 
again only for *local*.
In {{expectExceptionWhenReadingOpenFileAPI}}, when the following is called:
{code:java}
try (FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()) {
  intercept(FileNotFoundException.class, () -> {
    byte[] bytes = new byte[text.length()];
    return in.read(bytes, 0, bytes.length);
  });
}
{code}
the *{{FSDataInputStream in = guardedFs.openFile(testFilePath).build().get()}}* 
line throws the *FNFE*, and that's even before it's expected. That means 
something is going wrong when the open file API is used. I don't have a clue 
right now why this would happen only when using local and not when using dynamo, 
but I need to figure it out.
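
If the early FNFE from the open itself turns out to be acceptable behaviour here, one option is to widen the intercept to cover the open as well - a sketch for discussion, not a decision:
{code:java}
intercept(FileNotFoundException.class, () -> {
  // Variant: the builder's get() may raise the FNFE itself, so the whole
  // open-and-read sequence sits inside the intercept.
  try (FSDataInputStream in =
      guardedFs.openFile(testFilePath).build().get()) {
    byte[] bytes = new byte[text.length()];
    return in.read(bytes, 0, bytes.length);
  }
});
{code}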

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv merged pull request #545: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-03-04 Thread GitBox
ajayydv merged pull request #545: HDDS-1183. Override getDelegationToken API 
for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/545
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #545: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-03-04 Thread GitBox
ajayydv commented on issue #545: HDDS-1183. Override getDelegationToken API for 
OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/545#issuecomment-469366756
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-03-04 Thread GitBox
xiaoyuyao commented on issue #526: HDDS-1183. Override getDelegationToken API 
for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#issuecomment-469361880
 
 
   @elek this was reverted because it was merged with rebase instead of merge 
with squash. The PR was redone as https://github.com/apache/hadoop/pull/545.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r262176403
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,282 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 
'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readhead and seek policies 
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r262176390
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,282 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 
'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readhead and seek policies 
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are 
* @return a configuration
*/
   @Override
   protected Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 // patch in S3Guard options
 maybeEnableS3Guard(conf);
+// purge any per-bucket overrides.
+try {
+  URI bucketURI = new 
URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+  S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+  READAHEAD_RANGE,
+  INPUT_FADVISE);
+} catch (URISyntaxException e) {
+  throw new RuntimeException(e);
+}
+// the FS is uncached, so will need clearing in test teardowns.
+S3ATestUtils.disableFilesystemCaching(conf);
+conf.setInt(READAHEAD_RANGE, READAHEAD);
+conf.set(INPUT_FADVISE, seekPolicy);
+conf.set(INPUT_FADVISE, seekPolicy);
 return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
 return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+S3AFileSystem fs = getFileSystem();
+if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+  fs.close();
+}
+super.teardown();
+  }
+
+  /**
+   * This subclass of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same 
method.
+   * @param filepath path string in
+   * @return
+   * @throws IOException
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to end, read then seek back to the previous position to force normal
+   * seek policy to switch to random IO.
+   * This will call readByte to trigger the second GET
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+  throws IOException {
+// 
+long pos = in.getPos();
+in.seek(DATASET_LEN -1);
+ 

[GitHub] [hadoop] hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #539: HADOOP-16109. Parquet reading 
S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#issuecomment-469353640
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 997 | trunk passed |
   | +1 | compile | 925 | trunk passed |
   | +1 | checkstyle | 185 | trunk passed |
   | +1 | mvnsite | 102 | trunk passed |
   | +1 | shadedclient | 987 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 141 | trunk passed |
   | +1 | javadoc | 83 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | -1 | mvninstall | 26 | hadoop-aws in the patch failed. |
   | -1 | compile | 835 | root in the patch failed. |
   | -1 | javac | 835 | root in the patch failed. |
   | -0 | checkstyle | 179 | root: The patch generated 6 new + 10 unchanged - 0 
fixed = 16 total (was 10) |
   | -1 | mvnsite | 35 | hadoop-aws in the patch failed. |
   | -1 | whitespace | 0 | The patch has 5 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 602 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 25 | hadoop-aws in the patch failed. |
   | +1 | javadoc | 89 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 502 | hadoop-common in the patch failed. |
   | -1 | unit | 36 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 6052 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/539 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux ecfb19d4e7fe 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 15098df |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/testReport/ |
   | Max. process+thread count | 1390 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-539/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r262176398
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,282 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 
'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection params() {
+return Arrays.asList(new Object[][]{
+{INPUT_FADV_RANDOM},
+{INPUT_FADV_NORMAL},
+{INPUT_FADV_SEQUENTIAL},
+});
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+this.seekPolicy = seekPolicy;
+  }
+
   /**
* Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readhead and seek policies 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r262176379
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,282 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+    return Arrays.asList(new Object[][]{
+        {INPUT_FADV_RANDOM},
+        {INPUT_FADV_NORMAL},
+        {INPUT_FADV_SEQUENTIAL},
+    });
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+    this.seekPolicy = seekPolicy;
+  }
+
   /**
    * Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are used.
    * @return a configuration
    */
   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
     // patch in S3Guard options
     maybeEnableS3Guard(conf);
+    // purge any per-bucket overrides.
+    try {
+      URI bucketURI = new URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+      S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+          READAHEAD_RANGE,
+          INPUT_FADVISE);
+    } catch (URISyntaxException e) {
+      throw new RuntimeException(e);
+    }
+    // the FS is uncached, so will need clearing in test teardowns.
+    S3ATestUtils.disableFilesystemCaching(conf);
+    conf.setInt(READAHEAD_RANGE, READAHEAD);
+    conf.set(INPUT_FADVISE, seekPolicy);
     return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
     return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+    S3AFileSystem fs = getFileSystem();
+    if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+      fs.close();
+    }
+    super.teardown();
+  }
+
+  /**
+   * This override of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same method.
+   * @param filepath path string in
+   * @return a path unique to this seek policy
+   * @throws IOException failure
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+    return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force the
+   * normal seek policy to switch to random IO.
+   * This will call readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+      throws IOException {
+    //
+    long pos = in.getPos();
+    in.seek(DATASET_LEN - 1);
+ 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #539: HADOOP-16109. Parquet 
reading S3AFileSystem causes EOF
URL: https://github.com/apache/hadoop/pull/539#discussion_r262176368
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java
 ##
 @@ -18,31 +18,282 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.AbstractContractSeekTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3AInputPolicy;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
+import static com.google.common.base.Preconditions.checkNotNull;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADVISE;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_NORMAL;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_RANDOM;
+import static org.apache.hadoop.fs.s3a.Constants.INPUT_FADV_SEQUENTIAL;
+import static org.apache.hadoop.fs.s3a.Constants.READAHEAD_RANGE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestConstants.FS_S3A_IMPL_DISABLE_CACHE;
 import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 /**
  * S3A contract tests covering file seek.
  */
+@RunWith(Parameterized.class)
 public class ITestS3AContractSeek extends AbstractContractSeekTest {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestS3AContractSeek.class);
+
+  protected static final int READAHEAD = 1024;
+
+  private final String seekPolicy;
+
+  public static final int DATASET_LEN = READAHEAD * 2;
+
+  public static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+
+  /**
+   * Test array for parameterized test runs.
+   * @return a list of parameter tuples.
+   */
+  @Parameterized.Parameters
+  public static Collection<Object[]> params() {
+    return Arrays.asList(new Object[][]{
+        {INPUT_FADV_RANDOM},
+        {INPUT_FADV_NORMAL},
+        {INPUT_FADV_SEQUENTIAL},
+    });
+  }
+
+  public ITestS3AContractSeek(final String seekPolicy) {
+    this.seekPolicy = seekPolicy;
+  }
+
   /**
    * Create a configuration, possibly patching in S3Guard options.
+   * The FS is set to be uncached and the readahead and seek policies
+   * of the bucket itself are removed, so as to guarantee that the
+   * parameterized and test settings are used.
    * @return a configuration
    */
   @Override
   protected Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
     // patch in S3Guard options
     maybeEnableS3Guard(conf);
+    // purge any per-bucket overrides.
+    try {
+      URI bucketURI = new URI(checkNotNull(conf.get("fs.contract.test.fs.s3a")));
+      S3ATestUtils.removeBucketOverrides(bucketURI.getHost(), conf,
+          READAHEAD_RANGE,
+          INPUT_FADVISE);
+    } catch (URISyntaxException e) {
+      throw new RuntimeException(e);
+    }
+    // the FS is uncached, so will need clearing in test teardowns.
+    S3ATestUtils.disableFilesystemCaching(conf);
+    conf.setInt(READAHEAD_RANGE, READAHEAD);
+    conf.set(INPUT_FADVISE, seekPolicy);
     return conf;
   }
 
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
     return new S3AContract(conf);
   }
+
+  @Override
+  public void teardown() throws Exception {
+    S3AFileSystem fs = getFileSystem();
+    if (fs.getConf().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
+      fs.close();
+    }
+    super.teardown();
+  }
+
+  /**
+   * This override of the {@code path(path)} operation adds the seek policy
+   * to the end to guarantee uniqueness across different calls of the same method.
+   * @param filepath path string in
+   * @return a path unique to this seek policy
+   * @throws IOException failure
+   */
+  @Override
+  protected Path path(final String filepath) throws IOException {
+    return super.path(filepath + "-" + seekPolicy);
+  }
+
+  /**
+   * Go to the end, read, then seek back to the previous position to force the
+   * normal seek policy to switch to random IO.
+   * This will call readByte to trigger the second GET.
+   * @param in input stream
+   * @return the byte read
+   * @throws IOException failure.
+   */
+  private byte readAtEndAndReturn(final FSDataInputStream in)
+      throws IOException {
+    //
 
 Review comment:
   whitespace:end of line
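
Based on its javadoc, a plausible completion of the truncated readAtEndAndReturn method above (a sketch only; the actual patch may differ):

{code:java}
private byte readAtEndAndReturn(final FSDataInputStream in)
    throws IOException {
  // remember the current position, read the final byte of the dataset,
  // then seek back so that the next read forces a second GET
  long pos = in.getPos();
  in.seek(DATASET_LEN - 1);
  byte b = in.readByte();
  in.seek(pos);
  return b;
}
{code}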
   


[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-03-04 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783600#comment-16783600
 ] 

He Xiaoqiao commented on HADOOP-16161:
--

[~elgoiri], I think it is not related to the depth of the topology, since 
#getWeightUsingNetworkLocation does not calculate distances at the leaf level 
of the topology at all. FYI.

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves with a topology like:
> Rack: /IDC/RACK1
>    hostname1
>    hostname2
> Rack: /IDC/RACK2
>    hostname3
>    hostname4
> 2. A reader on hostname1 calculates the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> no rack of the topology, calculates the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is obviously not the expected value; the truth 
> is [4,4,4]. This issue may cause a reader not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add a constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.
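
To make the arithmetic in the quoted description concrete, here is a minimal, illustrative sketch (hypothetical code, not the actual NetworkTopology implementation) of distance as one hop per level up to the deepest common ancestor:

{code:java}
/** Illustrative only: distance between two topology paths. */
public class WeightSketch {

  /** Paths look like "/IDC/RACK1/hostname1" or "/IDC/RACK1". */
  static int distance(String a, String b) {
    String[] pa = a.substring(1).split("/");
    String[] pb = b.substring(1).split("/");
    int common = 0;
    while (common < Math.min(pa.length, pb.length)
        && pa[common].equals(pb[common])) {
      common++;
    }
    // hops from each side up to the deepest common ancestor
    return (pa.length - common) + (pb.length - common);
  }

  public static void main(String[] args) {
    // host-to-host across racks: /IDC/RACK1/hostname1 vs /IDC/RACK2/hostname3 -> 4
    System.out.println(distance("/IDC/RACK1/hostname1", "/IDC/RACK2/hostname3"));
    // rack-to-rack only, which is what #getWeightUsingNetworkLocation
    // effectively computes: /IDC/RACK1 vs /IDC/RACK2 -> 2
    System.out.println(distance("/IDC/RACK1", "/IDC/RACK2"));
    // hence the proposed "+2": the two missing rack-to-host leaf hops
  }
}
{code}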



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783591#comment-16783591
 ] 

Hadoop QA commented on HADOOP-16161:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961003/HADOOP-16161.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 243190de9faa 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 15098df |

[GitHub] [hadoop] hadoop-yetus commented on issue #551: HADOOP-16162 Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #551: HADOOP-16162 Remove unused Job Summary 
Appender configurations from log4j.properties
URL: https://github.com/apache/hadoop/pull/551#issuecomment-469343933
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1049 | trunk passed |
   | +1 | mvnsite | 72 | trunk passed |
   | +1 | shadedclient | 1775 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 55 | the patch passed |
   | +1 | mvnsite | 66 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 55 | hadoop-common in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 2829 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-551/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/551 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  |
   | uname | Linux 48e0b81df316 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 15098df |
   | maven | version: Apache Maven 3.3.9 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-551/1/testReport/ |
   | Max. process+thread count | 421 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-551/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16161) NetworkTopology#getWeightUsingNetworkLocation return unexpected result

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783582#comment-16783582
 ] 

Íñigo Goiri commented on HADOOP-16161:
--

There might be topologies deeper than {{/d1/r2}}.
Can we avoid having a +2 and use the actual depth?
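
As an illustration of that suggestion (a hypothetical sketch, not the patch under review), the leaf correction could be derived from the topology depth instead of being hard-coded:

{code:java}
/**
 * Hypothetical sketch: instead of a hard-coded +2, derive the leaf
 * correction from the actual topology depth. Assumes racks sit at
 * rackDepth and hosts at topologyDepth below the root.
 */
public class DepthAwareWeight {
  static int weight(int rackDistance, int topologyDepth, int rackDepth) {
    int leafLevels = topologyDepth - rackDepth; // 1 for /IDC/RACK/host
    return rackDistance + 2 * leafLevels;       // one extra hop on each side
  }

  public static void main(String[] args) {
    // rack distance 2 in a 3-level topology (/IDC/RACK/host) -> host weight 4
    System.out.println(weight(2, 3, 2));
    // deeper topology (/d1/d2/rack/host): same correction logic still applies
    System.out.println(weight(2, 4, 3));
  }
}
{code}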

> NetworkTopology#getWeightUsingNetworkLocation return unexpected result
> --
>
> Key: HADOOP-16161
> URL: https://issues.apache.org/jira/browse/HADOOP-16161
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-16161.001.patch
>
>
> Consider the following scenario:
> 1. There are 4 slaves with a topology like:
> Rack: /IDC/RACK1
>    hostname1
>    hostname2
> Rack: /IDC/RACK2
>    hostname3
>    hostname4
> 2. A reader on hostname1 calculates the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeight; the corresponding values are [0,4,4].
> 3. A reader on a client that is not in the topology, in the same IDC but in 
> no rack of the topology, calculates the weight between itself and [hostname1, 
> hostname3, hostname4] via #getWeightUsingNetworkLocation; the corresponding 
> values are [2,2,2].
> 4. Other readers get similar results.
> The weight result for case #3 is obviously not the expected value; the truth 
> is [4,4,4]. This issue may cause a reader not to follow the intended order: 
> local -> local rack -> remote rack.
> After digging into the implementation, the root cause is that 
> #getWeightUsingNetworkLocation only calculates the distance between racks 
> rather than between hosts.
> I think we should add a constant 2 to correct the weight returned by 
> #getWeightUsingNetworkLocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-03-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783553#comment-16783553
 ] 

Hadoop QA commented on HADOOP-15958:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}184m  5s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}318m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960984/HADOOP-15958-004.patch
 |
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  compile  javac  
javadoc  mvninstall  mvnsite  unit  shadedclient  xml  |
| uname | Linux 00a955656703 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 15098df |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| unit | 

[jira] [Commented] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-03-04 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783541#comment-16783541
 ] 

Steve Loughran commented on HADOOP-16109:
-

The latest PR update adds the tests and makes sure that each parameterized run 
is switching seek policies.

> Parquet reading S3AFileSystem causes EOF
> 
>
> Key: HADOOP-16109
> URL: https://issues.apache.org/jira/browse/HADOOP-16109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2
>Reporter: Dave Christianson
>Assignee: Steve Loughran
>Priority: Blocker
>
> When using S3AFileSystem to read Parquet files, a specific set of 
> circumstances causes an EOFException that is not thrown when reading the 
> same file from local disk.
> Note this has only been observed under specific circumstances:
>  - when the reader is doing a projection (which causes it to seek 
> backwards and put the filesystem into random mode)
>  - when the file is larger than the readahead buffer size
>  - when the seek behavior of the Parquet reader moves it towards the end of 
> the current input stream without reopening, such that the next read on the 
> currently open stream will run past that stream's end.
> Exception from Parquet reader is as follows:
> {code}
> Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
> to read
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
>  at 
> org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
>  at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
>  at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
>  at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
>  at 
> org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
>  at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> The following example program generates the same root behavior (sans finding a 
> Parquet file that happens to trigger this condition) by purposely reading 
> past the already active readahead range on any file >= 1029 bytes in size:
> {code:java}
> final Configuration conf = new Configuration();
> conf.set("fs.s3a.readahead.range", "1K");
> conf.set("fs.s3a.experimental.input.fadvise", "random");
> final FileSystem fs = FileSystem.get(path.toUri(), conf);
> // forward seek reading across the readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>   final byte[] temp = new byte[5];
>   in.readByte();
>   in.readFully(1023, temp); // <-- works
> }
> // forward seek reading from the end of the readahead boundary
> try (FSDataInputStream in = fs.open(path)) {
>   final byte[] temp = new byte[5];
>   in.readByte();
>   in.readFully(1024, temp); // <-- throws EOFException
> }
> {code}
>  
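
A compact way to see the boundary arithmetic in the reproducer above (a sketch under the stated assumptions, not S3A internals; the class and method names are hypothetical): with fs.s3a.readahead.range = 1K the first GET covers bytes [0, 1024), so a read starting at 1023 still falls inside the open range, while one starting at 1024 begins exactly at its end and needs a new GET.

{code:java}
/** Sketch of the boundary arithmetic described above (not S3A code). */
public class ReadaheadBoundary {
  static boolean insideOpenRange(long readPos, long rangeStart, long readahead) {
    long rangeEnd = rangeStart + readahead; // exclusive end of the first GET
    return readPos < rangeEnd;
  }

  public static void main(String[] args) {
    long readahead = 1024; // fs.s3a.readahead.range = 1K
    System.out.println(insideOpenRange(1023, 0, readahead)); // true  -> works
    System.out.println(insideOpenRange(1024, 0, readahead)); // false -> needs a
    // new GET; the bug is that the stream hit EOF on the old range instead of
    // reopening at the new position
  }
}
{code}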



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-03-04 Thread GitBox
bharatviswa504 commented on issue #523: HDDS-623. On SCM UI, Node Manager info 
is empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-469327716
 
 
   yes, we still need nodemetrics.
   +1 LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r262147191
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -172,8 +164,7 @@ Test list parts
 
 #upload parts
${system} =         Evaluate    platform.system()    platform
-   Run Keyword if      '${system}' == 'Darwin'    Create Random file for mac
-   Run Keyword if      '${system}' == 'Linux'     Create Random file for linux
+   Run Keyword         Create Random file    5
 
 Review comment:
   whitespace:tabs in line
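
For context, the unified keyword in the diff above replaces the per-OS file-creation variants; as a rough illustration of the portable intent (a hypothetical Java equivalent, not part of the patch), the same effect could be achieved with:

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Random;

/** Hypothetical portable equivalent of the "Create Random file" keyword:
 *  writes the given number of megabytes of random bytes to /tmp/part1,
 *  with no Darwin/Linux-specific branching. */
public class CreateRandomFile {
  public static void main(String[] args) throws IOException {
    int sizeMb = Integer.parseInt(args[0]); // e.g. 5, as in the test above
    byte[] buf = new byte[1024 * 1024];
    Random random = new Random();
    try (OutputStream out = Files.newOutputStream(Paths.get("/tmp/part1"))) {
      for (int i = 0; i < sizeMb; i++) {
        random.nextBytes(buf);
        out.write(buf);
      }
    }
  }
}
{code}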
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r262147227
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -75,8 +71,7 @@ Test Multipart Upload Complete
 
 #upload parts
${system} =         Evaluate    platform.system()    platform
-   Run Keyword if      '${system}' == 'Darwin'    Create Random file for mac
-   Run Keyword if      '${system}' == 'Linux'     Create Random file for linux
+   Run Keyword         Create Random file    5
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r262147210
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -55,13 +53,11 @@ Test Multipart Upload
 # multipart upload, uploading each part as 5MB file, exception is for last part
 
${system} =         Evaluate    platform.system()    platform
-   Run Keyword if      '${system}' == 'Darwin'    Create Random file for mac
-   Run Keyword if      '${system}' == 'Linux'     Create Random file for linux
+   Run Keyword         Create Random file    5
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r262147199
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -205,3 +196,10 @@ Test list parts
 
 #finally abort it
  ${result} =     Execute AWSS3APICli and checkrc    abort-multipart-upload --bucket ${BUCKET} --key multipartKey5 --upload-id ${uploadID}    0
+
+Test Multipart Upload with the simplified aws s3 cp API
+    Create Random file    22
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on issue #549: HDDS-1213. Support plain text S3 MPU 
initialization request
URL: https://github.com/apache/hadoop/pull/549#issuecomment-469325868
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1206 | trunk passed |
   | -1 | compile | 117 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 28 | trunk passed |
   | -1 | mvnsite | 30 | dist in trunk failed. |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 41 | trunk passed |
   | +1 | javadoc | 39 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | -1 | compile | 100 | hadoop-ozone in the patch failed. |
   | -1 | javac | 100 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 24 | the patch passed |
   | -1 | mvnsite | 22 | dist in the patch failed. |
   | -1 | whitespace | 0 | The patch has 5 line(s) with tabs. |
   | +1 | shadedclient | 860 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 51 | the patch passed |
   | +1 | javadoc | 39 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 40 | s3gateway in the patch passed. |
   | -1 | unit | 22 | dist in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3808 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/549 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a01c85c47fa9 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 15098df |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/branch-mvnsite-hadoop-ozone_dist.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/patch-mvnsite-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/artifact/out/patch-unit-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-549/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support plain text S3 MPU initialization request

2019-03-04 Thread GitBox
hadoop-yetus commented on a change in pull request #549: HDDS-1213. Support 
plain text S3 MPU initialization request
URL: https://github.com/apache/hadoop/pull/549#discussion_r262147219
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -55,13 +53,11 @@ Test Multipart Upload
 # multipart upload, uploading each part as 5MB file, exception is for last part
 
${system} =         Evaluate    platform.system()    platform
-   Run Keyword if      '${system}' == 'Darwin'    Create Random file for mac
-   Run Keyword if      '${system}' == 'Linux'     Create Random file for linux
+   Run Keyword         Create Random file    5
${result} =         Execute AWSS3APICli    upload-part --bucket ${BUCKET} --key multipartKey --part-number 1 --body /tmp/part1 --upload-id ${nextUploadID}
Should contain      ${result}    ETag
# override part
-   Run Keyword if      '${system}' == 'Darwin'    Create Random file for mac
-   Run Keyword if      '${system}' == 'Linux'     Create Random file for linux
+   Run Keyword         Create Random file    5
 
 Review comment:
   whitespace:tabs in line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16162) Remove unused Job Summary Appender configurations from log4j.properties

2019-03-04 Thread Chen Zhi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhi updated HADOOP-16162:
--
Labels: CI pull-request-available  (was: CI)

> Remove unused Job Summary Appender configurations from log4j.properties
> ---
>
> Key: HADOOP-16162
> URL: https://issues.apache.org/jira/browse/HADOOP-16162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.3-alpha
>Reporter: Chen Zhi
>Priority: Major
>  Labels: CI, pull-request-available
> Attachments: HADOOP-16162.1.patch
>
>
> The Job Summary Appender (JSA) was introduced in 
> [MAPREDUCE-740|https://issues.apache.org/jira/browse/MAPREDUCE-740] to 
> provide summary information about a job's runtime. This appender is 
> only referenced by the logger defined in 
> org.apache.hadoop.mapred.JobInProgress$JobSummary. However, that class was 
> removed in 
> [MAPREDUCE-4266|https://issues.apache.org/jira/browse/MAPREDUCE-4266] 
> together with other M/R1 files, so this appender is no longer used and I 
> think we can remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


