[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766754#comment-16766754
 ] 

Vinayakumar B commented on HADOOP-16108:


+1 for the latest change.

Will push later today. It seems gitbox is down at the moment.

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save 
> Unnecessary RPC's 
> --
>
> Key: HADOOP-16108
> URL: https://issues.apache.org/jira/browse/HADOOP-16108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16108-01.patch, HDFS-14255-01.patch, 
> HDFS-14255-02.patch
>
>
> As of now tail -f follows every 5 seconds. We should allow a parameter to 
> specify this sleep interval. Linux makes this configurable via the -s 
> parameter.
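A back-of-envelope sketch of the saving (illustrative only, not Hadoop source; the -s option is the proposal under discussion): each follow-poll of the file costs one RPC, so the sleep interval directly sets the RPC rate.

```java
/** Illustrative only (not actual Hadoop code): each poll of the file
 *  length in a tail -f loop costs one NameNode RPC, so a configurable
 *  sleep interval directly controls the RPC rate. */
public class FollowIntervalSketch {
    static int rpcsPerMinute(int sleepSeconds) {
        // one length-probe RPC per poll
        return 60 / sleepSeconds;
    }

    public static void main(String[] args) {
        System.out.println(rpcsPerMinute(5));  // hard-coded 5s default: 12 RPCs/min
        System.out.println(rpcsPerMinute(30)); // a user-chosen 30s interval: 2 RPCs/min
    }
}
```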



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766747#comment-16766747
 ] 

Hudson commented on HADOOP-16097:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15943 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15943/])
HADOOP-16097. Provide proper documentation for FairCallQueue. (yqlin: rev 
7b11b404a35f93e4b4b12546034ef8001720eb5f)
* (add) 
hadoop-common-project/hadoop-common/src/site/resources/images/faircallqueue-overview.png
* (edit) hadoop-project/src/site/site.xml
* (add) hadoop-common-project/hadoop-common/src/site/markdown/FairCallQueue.md


> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add new documentation for it and related components.






[jira] [Updated] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-16097:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk.

Thanks [~xkrogen] for completing the documentation of the Fair Call Queue. It 
will help users a lot. :)

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add new documentation for it and related components.






[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766726#comment-16766726
 ] 

Yiqun Lin commented on HADOOP-16097:


LGTM, +1. Will commit this shortly.

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add new documentation for it and related components.






[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766639#comment-16766639
 ] 

Hadoop QA commented on HADOOP-16068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 5 
new + 9 unchanged - 0 fixed = 14 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 63 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958471/HADOOP-16068-005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 635c261cd804 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3dc2523 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15918/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15918/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-16106) hadoop-aws project javadoc does not compile

2019-02-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766611#comment-16766611
 ] 

Eric Yang commented on HADOOP-16106:


[~ste...@apache.org] I am using openjdk version "1.8.0_151". The build machine 
is using 1.8.0_191. I am not sure whether the newer JDK made some improvement 
to javadoc that masks the issue.

> hadoop-aws project javadoc does not compile
> ---
>
> Key: HADOOP-16106
> URL: https://issues.apache.org/jira/browse/HADOOP-16106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Reporter: Eric Yang
>Assignee: Steve Loughran
>Priority: Trivial
>
> Apache Hadoop Amazon Web Services support maven javadoc doesn't build 
> properly because of two non-HTML-friendly characters in javadoc comments.
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java:31:
>  error: bad HTML entity
> [ERROR]  * Please don't refer to these outside of this module & its tests.
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:115:
>  error: bad use of '>'
> [ERROR]* @return a value >= 0
> [ERROR]  ^
> {code}
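For reference, both errors are fixed by escaping the character as an HTML entity or wrapping the expression in an inline tag; a minimal sketch (the class and method below are invented for illustration, only the comment style matters):

```java
/** Minimal sketch of the javadoc fixes; the class and method are
 *  invented for illustration, only the doc-comment style matters. */
public class JavadocFixSketch {
    /**
     * Please don't refer to these outside of this module &amp; its tests.
     * <p>
     * Bare '&amp;' and '&gt;' break the javadoc build because comment text
     * is parsed as HTML; {@code ...} also works for code-like expressions.
     *
     * @return a value {@code >= 0}
     */
    public static int nonNegativeValue() {
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(nonNegativeValue());
    }
}
```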






[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766604#comment-16766604
 ] 

Steve Loughran commented on HADOOP-16068:
-

[~DanielZhou]: thanks, fixed that

Patch 005
* An ExtensionHelper class removes the repetition of probes for type, casting, 
and invocation. Note the use of Optional, and how Java's checked exceptions 
cripple its full exploitation.
* Tests for the custom OAuth provider chain
* tests to verify the unbound DT provider works.
* fix javadocs & style from patch 005
* TokenAccessProviderException passes cause parameter to superclass
* Custom/mock Oauth token provider with unit test of construction & init.
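A hedged sketch of the probe/cast/invoke pattern the first bullet describes (the names below are illustrative stand-ins, not the actual ExtensionHelper or ABFS API):

```java
import java.util.Optional;

/** Hedged sketch of an Optional-based probe/cast/invoke helper; the
 *  interface and method names are stand-ins, not the real ABFS API. */
public class ExtensionHelperSketch {
    /** Stand-in for an optional extension interface a plugin may implement. */
    interface BoundDTExtension {
        String getCanonicalServiceName();
    }

    /** Probe an extension for an optional interface, casting if it matches. */
    static <T> Optional<T> ifBoundTo(Object extension, Class<T> type) {
        return type.isInstance(extension)
                ? Optional.of(type.cast(extension))
                : Optional.empty();
    }

    public static void main(String[] args) {
        BoundDTExtension ext = () -> "abfs://container@account";
        // Optional collapses probe + cast + invoke into one expression...
        String name = ifBoundTo((Object) ext, BoundDTExtension.class)
                .map(BoundDTExtension::getCanonicalServiceName)
                .orElse("unbound");
        System.out.println(name);
        // ...but a target method declaring a checked IOException could not
        // be passed to map() directly, which is the limitation noted above.
    }
}
```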

For that TokenAccessProviderException to stop losing its cause I had to add a 
new constructor to AzureBlobFileSystemException of type (String, Throwable). 
I've kept the existing (String, Exception) so that anything compiled against 
the old binaries will still link. I could change how 
TokenAccessProviderException does the cause setting (via initCause) if you'd 
prefer; it won't change the signature of either exception.
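The cause-preservation point can be sketched with simplified stand-in classes (not the actual Azure exception hierarchy): once the base class exposes a (String, Throwable) constructor, the wrapped cause survives and getCause() returns it.

```java
/** Illustrative sketch of preserving an exception cause through a wrapper;
 *  the class names are simplified stand-ins for the Azure exceptions
 *  discussed, not the real hierarchy. */
public class CauseChainSketch {
    static class BaseFsException extends Exception {
        BaseFsException(String message, Throwable cause) {
            // Throwable (not Exception) lets any cause pass through
            super(message, cause);
        }
    }

    static class TokenProviderException extends BaseFsException {
        TokenProviderException(String message, Throwable cause) {
            super(message, cause); // the cause now survives the wrap
        }
    }

    public static void main(String[] args) {
        Throwable root = new IllegalStateException("token fetch failed");
        TokenProviderException wrapped =
                new TokenProviderException("auth failure", root);
        // getCause() returns the original failure instead of null
        System.out.println(wrapped.getCause() == root);
    }
}
```

The alternative mentioned above, calling initCause() after construction, achieves the same chain without touching either constructor signature.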

Tested: Azure Amsterdam. I've just changed the javadocs & style after doing 
that, so I may have just broken something.


> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.






[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Attachment: HADOOP-16068-005.patch

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.






[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.






[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Open  (was: Patch Available)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.






[jira] [Commented] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766523#comment-16766523
 ] 

Hadoop QA commented on HADOOP-16107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 46s{color} 
| {color:red} root generated 1 new + 1495 unchanged - 0 fixed = 1496 total (was 
1495) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 32s{color} | {color:orange} root: The patch generated 1 new + 217 unchanged 
- 0 fixed = 218 total (was 217) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 52s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
1s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958432/HADOOP-16107-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae84a677b782 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default 

[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766405#comment-16766405
 ] 

Hadoop QA commented on HADOOP-16108:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 1 unchanged - 2 fixed = 1 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958434/HADOOP-16108-01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1be9d4fd96ac 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15917/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15917/console |
| Powered by | Apache 

[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option

2019-02-12 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-15281:
--
Description: 
Currently Distcp uploads a file by two strategies

# append parts
# copy to temp then rename


option 2 executes the following sequence in {{promoteTmpToTarget}}
{code}
if ((fs.exists(target) && !fs.delete(target, false))
|| (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
|| !fs.rename(tmpTarget, target)) {
  throw new IOException("Failed to promote tmp-file:" + tmpTarget
  + " to: " + target);
}
{code}

For any object store, that's a lot of HTTP requests; for S3A you are looking at 
12+ requests and an O(data) copy call. 

This is not a good upload strategy for any store which manifests its output 
atomically at the end of the write().

Proposed: add a switch to write directly to the dest path, which can be 
supplied as either a conf option (distcp.direct.write = true) or a CLI option 
(-direct).






  was:
Currently Distcp uploads a file by two strategies

# append parts
# copy to temp then rename


option 2 executes the following sequence in {{promoteTmpToTarget}}
{code}
if ((fs.exists(target) && !fs.delete(target, false))
|| (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
|| !fs.rename(tmpTarget, target)) {
  throw new IOException("Failed to promote tmp-file:" + tmpTarget
  + " to: " + target);
}
{code}

For any object store, that's a lot of HTTP requests; for S3A you are looking at 
12+ requests and an O(data) copy call. 

This is not a good upload strategy for any store which manifests its output 
atomically at the end of the write().

Proposed: add a switch to write directly to the dest path, which can be 
supplied as either a conf option (distcp.direct.write) or a CLI option 
(-direct).







> Distcp to add no-rename copy option
> ---
>
> Key: HADOOP-15281
> URL: https://issues.apache.org/jira/browse/HADOOP-15281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Andrew Olson
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch, 
> HADOOP-15281-003.patch, HADOOP-15281-004.patch
>
>
> Currently Distcp uploads a file by two strategies
> # append parts
> # copy to temp then rename
> option 2 executes the following sequence in {{promoteTmpToTarget}}
> {code}
> if ((fs.exists(target) && !fs.delete(target, false))
> || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
> || !fs.rename(tmpTarget, target)) {
>   throw new IOException("Failed to promote tmp-file:" + tmpTarget
>   + " to: " + target);
> }
> {code}
> For any object store, that's a lot of HTTP requests; for S3A you are looking 
> at 12+ requests and an O(data) copy call. 
> This is not a good upload strategy for any store which manifests its output 
> atomically at the end of the write().
> Proposed: add a switch to write directly to the dest path, which can be 
> supplied as either a conf option (distcp.direct.write = true) or a CLI option 
> (-direct).
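Counting the filesystem calls in the quoted snippet makes the motivation concrete; a small illustrative sketch (the operation names mirror promoteTmpToTarget, and the counts are the point, not exact HTTP request totals):

```java
import java.util.List;

/** Back-of-envelope sketch of why rename-based promotion is expensive on
 *  an object store: each FS call maps to one or more HTTP requests, and
 *  rename on S3A is an O(data) server-side copy plus delete. */
public class DirectWriteSketch {
    // FS operations issued by the rename-based promotion path
    static final List<String> RENAME_PATH = List.of(
            "exists(target)", "delete(target)",
            "exists(target.getParent())", "mkdirs(parent)",
            "rename(tmpTarget, target)" // O(data) copy on S3A
    );

    // With -direct (or distcp.direct.write=true) the upload goes straight
    // to the destination and manifests atomically on close()
    static final List<String> DIRECT_PATH = List.of("create(target)");

    public static void main(String[] args) {
        System.out.println(RENAME_PATH.size()); // 5 metadata/copy calls
        System.out.println(DIRECT_PATH.size()); // 1
    }
}
```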






[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option

2019-02-12 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-15281:
--
Fix Version/s: 3.1.3







[jira] [Resolved] (HADOOP-10007) distcp / mv is not working on ftp

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10007.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

[~noslowerdna] I think you are right. Closing as fixed

> distcp / mv is not working on ftp
> -
>
> Key: HADOOP-10007
> URL: https://issues.apache.org/jira/browse/HADOOP-10007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Ubuntu 12.04.2 LTS
> Hadoop 2.0.0-cdh4.2.1
> Subversion 
> file:///var/lib/jenkins/workspace/generic-package-ubuntu64-12-04/CDH4.2.1-Packaging-Hadoop-2013-04-22_09-50-19/hadoop-2.0.0+960-1.cdh4.2.1.p0.9~precise/src/hadoop-common-project/hadoop-common
>  -r 144bd548d481c2774fab2bec2ac2645d190f705b
> Compiled by jenkins on Mon Apr 22 10:26:30 PDT 2013
> From source with checksum aef88defdddfb22327a107fbd7063395
>Reporter: Fabian Zimmermann
>Priority: Major
> Fix For: 3.3.0
>
>
> i'm just trying to backup some files to our ftp-server.
> hadoop distcp hdfs:///data/ ftp://user:pass@server/data/
> returns after some minutes with:
> Task TASKID="task_201308231529_97700_m_02" TASK_TYPE="MAP" 
> TASK_STATUS="FAILED" FINISH_TIME="1380217916479" 
> ERROR="java\.io\.IOException: Cannot rename parent(source): 
> ftp://x:x@backup2/data/, parent(destination):  ftp://x:x@backup2/data/
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:557)
>   at 
> org\.apache\.hadoop\.fs\.ftp\.FTPFileSystem\.rename(FTPFileSystem\.java:522)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:154)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.moveTaskOutputs(FileOutputCommitter\.java:172)
>   at 
> org\.apache\.hadoop\.mapred\.FileOutputCommitter\.commitTask(FileOutputCommitter\.java:132)
>   at 
> org\.apache\.hadoop\.mapred\.OutputCommitter\.commitTask(OutputCommitter\.java:221)
>   at org\.apache\.hadoop\.mapred\.Task\.commit(Task\.java:1000)
>   at org\.apache\.hadoop\.mapred\.Task\.done(Task\.java:870)
>   at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:329)
>   at org\.apache\.hadoop\.mapred\.Child$4\.run" TASK_ATTEMPT_ID="" .
> I googled a bit and added
> fs.ftp.host = backup2
> fs.ftp.user.backup2 = user
> fs.ftp.password.backup2 = password
> to core-site.xml, then I was able to execute:
> hadoop fs -ls ftp:///data/
> hadoop fs -rm ftp:///data/test.file
> but as soon as I try
> hadoop fs -mv file:///data/test.file ftp:///data/test2.file
> mv: `ftp:///data/test.file': Input/output error
> I enabled debug-logging in our ftp-server and got:
> Sep 27 15:24:33 backup2 ftpd[38241]: command: LIST /data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 150
> Sep 27 15:24:33 backup2 ftpd[38241]: Opening BINARY mode data connection for 
> '/bin/ls'.
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 226
> Sep 27 15:24:33 backup2 ftpd[38241]: Transfer complete.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: CWD ftp:/data
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Sep 27 15:24:33 backup2 ftpd[38241]: ftp:/data: No such file or directory.
> Sep 27 15:24:33 backup2 ftpd[38241]: command: RNFR test.file
> Sep 27 15:24:33 backup2 ftpd[38241]: <--- 550
> Looks like the generation of "CWD" is buggy: hadoop tries to cd into
> "ftp:/data", but it should use "/data".
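
The fix the report points at, sketched under the assumption that the client should pass only the URI's path component to CWD. The helper name is illustrative, not the FTPFileSystem code.

```java
import java.net.URI;

public class FtpCwd {
    // The buggy behaviour passes a stringified URI ("ftp:/data") as the CWD
    // argument; the path component ("/data") is what the server expects.
    static String cwdArgument(String target) {
        return URI.create(target).getPath();
    }

    public static void main(String[] args) {
        System.out.println(cwdArgument("ftp://user@backup2/data")); // /data
    }
}
```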






[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option

2019-02-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766363#comment-16766363
 ] 

Steve Loughran commented on HADOOP-15281:
-

It's in 3.1 via HADOOP-16096. 

I'm actually going to leave it at that point for branch-3, but pull it into 
branch-2, at least as far as a PoC patch. Why so? I have a repackaged version 
of distcp for branch-2 designed to upload to object stores faster by having the 
relevant changes (improved delete, for example). 

Ultimately, distcp needs replacement. For now, we can tweak the details,
carefully.

 







[jira] [Commented] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766358#comment-16766358
 ] 

Masatake Iwasaki commented on HADOOP-16098:
---

The log of HADOOP-15229 QA shows that "{{mvn javadoc:javadoc}}" has been 
executed with the patch applied. It looks like an issue in the verification of
the result.
https://builds.apache.org/job/PreCommit-HADOOP-Build/15879/console
{code}
cd /testptch/hadoop/hadoop-tools/hadoop-aws
/usr/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/yetus-m2/hadoop-trunk-patch-0
 -Ptest-patch -Pdocs -DskipTests clean javadoc:javadoc -DskipTests=true > 
/testptch/patchprocess/patch-javadoc-hadoop-tools_hadoop-aws.txt 2>&1
Elapsed:   0m 30s
{code}

"{{mvn -Ptest-patch -Pdocs -DskipTests clean javadoc:javadoc 
-DskipTests=true}}" fails if there are javadoc warnings on my local.

> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.






[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766335#comment-16766335
 ] 

Ayush Saxena commented on HADOOP-16108:
---

Thanx [~vinayrpet] for the review!!!

Made changes as suggested.

Pls Review :) 

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save 
> Unnecessary RPC's 
> --
>
> Key: HADOOP-16108
> URL: https://issues.apache.org/jira/browse/HADOOP-16108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16108-01.patch, HDFS-14255-01.patch, 
> HDFS-14255-02.patch
>
>
> As of now, tail -f polls every 5 seconds. We should allow a parameter to
> specify this sleep interval. Linux has this configurable in the form of the
> -s parameter.
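
The proposal boils down to making the follow loop's sleep configurable. A minimal sketch of the polling arithmetic follows; the code is illustrative, not the HDFS shell implementation, and each poll stands in for at least one RPC.

```java
public class TailFollow {
    // Count polls issued while following for durationMs with the given sleep
    // interval (pure arithmetic here; the real loop sleeps between reads).
    static int pollCount(long durationMs, long sleepIntervalMs) {
        int polls = 0;
        for (long t = 0; t < durationMs; t += sleepIntervalMs) {
            polls++; // each iteration re-reads the file tail over RPC
        }
        return polls;
    }

    public static void main(String[] args) {
        System.out.println(pollCount(60_000, 5_000));  // hard-coded 5 s default: 12 polls/min
        System.out.println(pollCount(60_000, 30_000)); // with a 30 s interval: 2 polls/min
    }
}
```

Raising the interval from the hard-coded 5 seconds to a user-chosen value cuts the RPC rate proportionally, which is the saving the title describes.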






[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766331#comment-16766331
 ] 

Hadoop QA commented on HADOOP-16108:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 1 unchanged - 2 fixed = 1 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
32s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957524/HDFS-14255-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 50d704dd80b0 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7806403 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15915/testReport/ |
| Max. process+thread count | 1718 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15915/console |
| Powered by | 

[jira] [Updated] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16108:
--
Attachment: HADOOP-16108-01.patch







[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: (was: HADOOP-16107-001.patch)

> LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> --
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch
>
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.
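
The failure mode described above can be sketched with a toy filter filesystem. All names are illustrative, not Hadoop code: the checksumming layer overrides one create overload but forgets another, and the forgotten one forwards straight to the inner store.

```java
public class WrapperGap {
    interface Fs {
        String create(String path);
        String createNonRecursive(String path);
    }
    static class InnerFs implements Fs {
        public String create(String p)             { return "raw:" + p; }
        public String createNonRecursive(String p) { return "raw:" + p; }
    }
    // Filter layer forwarding everything to the wrapped store.
    static class FilterFs implements Fs {
        final Fs inner;
        FilterFs(Fs inner) { this.inner = inner; }
        public String create(String p)             { return inner.create(p); }
        public String createNonRecursive(String p) { return inner.createNonRecursive(p); }
    }
    // Checksumming layer that overrides create() but misses the other overload.
    static class ChecksumFs extends FilterFs {
        ChecksumFs(Fs inner) { super(inner); }
        @Override public String create(String p) { return "crc:" + p; }
        // BUG: createNonRecursive() is not overridden, so it forwards to the
        // inner store and no checksum is produced; the fix is to override
        // every create/createFile/openFile entry point.
    }
    public static void main(String[] args) {
        Fs fs = new ChecksumFs(new InnerFs());
        System.out.println(fs.create("f"));             // crc:f
        System.out.println(fs.createNonRecursive("f")); // raw:f  (checksums skipped)
    }
}
```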






[jira] [Commented] (HADOOP-16106) hadoop-aws project javadoc does not compile

2019-02-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766299#comment-16766299
 ] 

Steve Loughran commented on HADOOP-16106:
-

Sorry, thanks. Eric: what Java version are you using? I've been relying on
Yetus to check the javadocs, but if it's letting things slip through, then I
have to stop doing that (and/or we get Yetus to be stricter here).

> hadoop-aws project javadoc does not compile
> ---
>
> Key: HADOOP-16106
> URL: https://issues.apache.org/jira/browse/HADOOP-16106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Reporter: Eric Yang
>Assignee: Steve Loughran
>Priority: Trivial
>
> Apache Hadoop Amazon Web Services support maven javadoc doesn't build 
> properly because two non-html friendly characters in javadoc comments.
> {code}
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/InternalConstants.java:31:
>  error: bad HTML entity
> [ERROR]  * Please don't refer to these outside of this module & its tests.
> [ERROR]   ^
> [ERROR] 
> /home/eyang/test/hadoop/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:115:
>  error: bad use of '>'
> [ERROR]* @return a value >= 0
> [ERROR]  ^
> {code}
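
A hedged sketch of the kind of fix these two errors call for: escape the ampersand as `&amp;` and guard the `>` comparison with `{@literal}`. The class below is a stand-in, not the actual hadoop-aws sources.

```java
public class JavadocSafe {
    /**
     * Please don't refer to these outside of this module &amp; its tests.
     */
    public static final int DEMO_READAHEAD = 0;

    /**
     * @return a value {@literal >=} 0
     */
    public static int readahead() {
        return DEMO_READAHEAD;
    }

    public static void main(String[] args) {
        System.out.println(readahead()); // 0
    }
}
```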






[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-001.patch







[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-001.patch







[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Open  (was: Patch Available)







[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)

patch 001:
* TestFutureIO to see what thread the openFile().build call runs on (as the
initial hypothesis was "if these open() in a different thread, they may not get
counted").
* add protected method in filesystem to allow subclasses to get the base
builders for input and output
* overwrite createFile/openFile builder calls in ChecksumFileSystem; test to
verify that CRCs are being written/read (based on byte count alone)
* identify and override all other create/createNonRecursive calls which were
going direct to the inner FS, hence not creating CRCs
* add tests in TestLocalFileSystem to verify that all
create/createFile/open/openFile calls create/read checksums
* also adds a newline to TestJobCounters to ensure it gets tested too. The
final patch commit must omit this.

This is tagged as blocker as it is significant; it will need partial
backporting of the extra ChecksumFileSystem
create/createNonRecursive/createFile methods and tests to match.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] milleruntime closed pull request #473: HADOOP-11223. Create UnmodifiableConfiguration

2019-02-12 Thread GitBox
milleruntime closed pull request #473: HADOOP-11223. Create 
UnmodifiableConfiguration
URL: https://github.com/apache/hadoop/pull/473
 
 
   





[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766284#comment-16766284
 ] 

Vinayakumar B commented on HADOOP-16108:


[~ayushtkn], after re-checking the patch I found that the following change is 
not correct:

{code:java}
   @Override
   protected void processOptions(LinkedList<String> args) throws IOException {
-CommandFormat cf = new CommandFormat(1, 1, "f");
+CommandFormat cf = new CommandFormat(0, 3, "f");
{code}

{{min}} and {{max}} args for the tail command are both 1, i.e. the number of 
args left after parsing the options.
 Fix the test accordingly as well.

+1, pending above changes.
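To illustrate the point above, here is a self-contained sketch (not the real {{org.apache.hadoop.fs.shell.CommandFormat}}; the {{-s}} option name is borrowed from the Linux flag mentioned in the description) of why {{min}} and {{max}} stay 1: options such as {{-f}} and a sleep interval are consumed while parsing, so exactly one positional argument, the path, must remain afterwards.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class TailArgs {
    boolean follow;
    long sleepMillis = 5000;   // the current hard-coded 5 second default
    String path;

    static TailArgs parse(LinkedList<String> args) {
        TailArgs t = new TailArgs();
        List<String> positional = new ArrayList<>();
        while (!args.isEmpty()) {
            String a = args.removeFirst();
            if (a.equals("-f")) {
                t.follow = true;
            } else if (a.equals("-s")) {   // hypothetical option name
                t.sleepMillis = Long.parseLong(args.removeFirst()) * 1000;
            } else {
                positional.add(a);
            }
        }
        // CommandFormat(1, 1, ...) semantics: after options are stripped,
        // at least 1 and at most 1 argument may remain.
        if (positional.size() != 1) {
            throw new IllegalArgumentException(
                "expected exactly 1 path, got " + positional);
        }
        t.path = positional.get(0);
        return t;
    }

    public static void main(String[] argv) {
        TailArgs t = parse(new LinkedList<>(List.of("-f", "-s", "10", "/logs/app.log")));
        if (!t.follow || t.sleepMillis != 10000 || !t.path.equals("/logs/app.log")) {
            throw new AssertionError(t.path + " " + t.sleepMillis);
        }
        System.out.println("ok");
    }
}
```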

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save 
> Unnecessary RPC's 
> --
>
> Key: HADOOP-16108
> URL: https://issues.apache.org/jira/browse/HADOOP-16108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14255-01.patch, HDFS-14255-02.patch
>
>
> As of now tail -f follows every 5 seconds. We should allow a parameter to 
> specify this sleep interval. Linux has this configurable as in form of -s 
> parameter.






[jira] [Commented] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Vinayakumar B (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766255#comment-16766255
 ] 

Vinayakumar B commented on HADOOP-16108:


Moved to HADOOP, since changes are only in Hadoop Common.

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save 
> Unnecessary RPC's 
> --
>
> Key: HADOOP-16108
> URL: https://issues.apache.org/jira/browse/HADOOP-16108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14255-01.patch, HDFS-14255-02.patch
>
>
> As of now tail -f follows every 5 seconds. We should allow a parameter to 
> specify this sleep interval. Linux has this configurable as in form of -s 
> parameter.






[jira] [Moved] (HADOOP-16108) Tail Follow Interval Should Allow To Specify The Sleep Interval To Save Unnecessary RPC's

2019-02-12 Thread Vinayakumar B (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B moved HDFS-14255 to HADOOP-16108:
---

Key: HADOOP-16108  (was: HDFS-14255)
Project: Hadoop Common  (was: Hadoop HDFS)

> Tail Follow Interval Should Allow To Specify The Sleep Interval To Save 
> Unnecessary RPC's 
> --
>
> Key: HADOOP-16108
> URL: https://issues.apache.org/jira/browse/HADOOP-16108
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14255-01.patch, HDFS-14255-02.patch
>
>
> As of now tail -f follows every 5 seconds. We should allow a parameter to 
> specify this sleep interval. Linux has this configurable as in form of -s 
> parameter.






[jira] [Updated] (HADOOP-16107) LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Summary: LocalFileSystem doesn't wrap all create() or new builder calls; 
may skip CRC logic  (was: LocalFileSystem doesn't all create() or new builder 
calls)

> LocalFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> --
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> LocalFS is a subclass of filterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the innerFS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so not 
> generating CRCs
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.






[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766252#comment-16766252
 ] 

Hadoop QA commented on HADOOP-16097:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16097 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958416/HADOOP-16097.002.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  xml  |
| uname | Linux e7620d1d29de 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 20b92cd |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15914/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.






[jira] [Created] (HADOOP-16107) LocalFileSystem doesn't all create() or new builder calls

2019-02-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16107:
---

 Summary: LocalFileSystem doesn't all create() or new builder calls
 Key: HADOOP-16107
 URL: https://issues.apache.org/jira/browse/HADOOP-16107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.3, 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


LocalFS is a subclass of filterFS, but overrides create and open so that 
checksums are created and read. 

MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
forwarded to the innerFS without CRC checking. Reviewing/fixing that has shown 
that some of the create methods aren't being correctly wrapped, so not 
generating CRCs

* createFile() builder

The following create calls

{code}
  public FSDataOutputStream createNonRecursive(final Path f,
  final FsPermission permission,
  final EnumSet<CreateFlag> flags,
  final int bufferSize,
  final short replication,
  final long blockSize,
  final Progressable progress) throws IOException;

  public FSDataOutputStream create(final Path f,
  final FsPermission permission,
  final EnumSet<CreateFlag> flags,
  final int bufferSize,
  final short replication,
  final long blockSize,
  final Progressable progress,
  final Options.ChecksumOpt checksumOpt) throws IOException {
return super.create(f, permission, flags, bufferSize, replication,
blockSize, progress, checksumOpt);
  }
{code}

This means that applications using these methods, directly or indirectly to 
create files aren't actually generating checksums.

Fix: implement these methods & relay to local create calls, not to the inner FS.
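A toy sketch of the wrapping bug and the proposed fix, using plain-Java stand-ins (these are NOT the actual Hadoop FilterFileSystem/ChecksumFileSystem classes): a checksumming filter overrides one create() variant but lets another fall through to the inner filesystem, so no CRC is recorded until the remaining variant is overridden to relay to the local create.

```java
import java.util.ArrayList;
import java.util.List;

// Toy inner filesystem: records what it writes.
class ToyFs {
    final List<String> log = new ArrayList<>();
    void create(String path) { log.add("data:" + path); }
    void createNonRecursive(String path) { create(path); }
}

// Filter layer: relays everything to the wrapped inner FS.
class FilterToyFs extends ToyFs {
    final ToyFs inner;
    FilterToyFs(ToyFs inner) { this.inner = inner; }
    @Override void create(String path) { inner.create(path); }
    @Override void createNonRecursive(String path) { inner.createNonRecursive(path); }
}

class BuggyChecksumFs extends FilterToyFs {
    BuggyChecksumFs(ToyFs inner) { super(inner); }
    // Only this variant adds the checksum side effect...
    @Override void create(String path) {
        log.add("crc:" + path + ".crc");
        super.create(path);
    }
    // ...createNonRecursive is NOT overridden, so it relays straight to the
    // inner FS and never writes a .crc file -- the bug reported here.
}

class FixedChecksumFs extends BuggyChecksumFs {
    FixedChecksumFs(ToyFs inner) { super(inner); }
    // The fix: override the remaining variant and relay to the local,
    // checksum-generating create(), not to the inner FS.
    @Override void createNonRecursive(String path) { create(path); }
}

public class WrapBugDemo {
    public static void main(String[] args) {
        BuggyChecksumFs buggy = new BuggyChecksumFs(new ToyFs());
        buggy.createNonRecursive("/a");
        if (!buggy.log.isEmpty()) throw new AssertionError(buggy.log);

        FixedChecksumFs fixed = new FixedChecksumFs(new ToyFs());
        fixed.createNonRecursive("/a");
        if (!fixed.log.contains("crc:/a.crc")) throw new AssertionError(fixed.log);
        System.out.println("ok");
    }
}
```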






[jira] [Commented] (HADOOP-16091) Create hadoop/ozone docker images with inline build process

2019-02-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766217#comment-16766217
 ] 

Eric Yang commented on HADOOP-16091:


{quote}For me the file based activation is not enough. I wouldn't like to build 
a new docker image with each build. I think it should be activated with 
explicit profile declaration.{quote}

Profile with explicit activation works; alternatively, don't start docker and 
it will skip all docker image builds.

{quote}With this approach for the docker based builds (eg. release builds, 
jenkins builds) we need docker-in-docker base image or we need to map the 
docker.sock from outside to inside.{quote}

My preference is mapping docker.sock from host.

{quote} My questions are still open: I think the we need a method to 
upgrade/modify/create images for existing releases, especially:
adding security fixes to existing, released images{quote}

An older version number cannot be a moving target.  I think security fixes 
should get a new version number, to prevent an un-patched image from being 
passed off as patched.  The Dockerfile must use yum or apt-get commands to 
fetch explicit versions of external dependencies.  This ensures the Dockerfile 
is git version controlled and an old build is reproducible until the external 
repository stops carrying the old binaries.  The Maven based release process 
just works when the yum commands in the Dockerfile are versioned.

{quote}creating new images for older releases{quote}

If this profile is programmed into hadoop-dist/pom.xml, I don't see any problem 
to release a 2.7.8 version that is backward compatible with 2.7.7.

{quote}I think the containers are more reproducible if they are based on 
released tar files. {quote}

Use artifactItem tag to pick up the release tarball and include it as part of 
the docker build.  This is shown in the example code above.  I think this 
satisfies your statement, or am I missing something?
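For readers unfamiliar with the artifactItem approach, a rough sketch of the kind of pom.xml wiring being described follows. This is illustrative only: the plugin versions, coordinates and output path are assumptions, not the actual hadoop-dist/pom.xml contents; it uses the maven-dependency-plugin copy goal to stage the release tarball for a subsequent docker build step.

```xml
<!-- Illustrative sketch, not the real hadoop-dist/pom.xml wiring. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-release-tarball</id>
      <phase>prepare-package</phase>
      <goals><goal>copy</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-dist</artifactId>
            <version>${project.version}</version>
            <type>tar.gz</type>
            <!-- Staged next to the Dockerfile so `docker build` can ADD it -->
            <outputDirectory>${project.build.directory}/docker</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
```

A docker build execution (via exec-maven-plugin or a docker plugin) bound to a later phase, and guarded by an explicitly activated profile, would then pick up the staged tarball.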

> Create hadoop/ozone docker images with inline build process
> ---
>
> Key: HADOOP-16091
> URL: https://issues.apache.org/jira/browse/HADOOP-16091
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> {quote}1, 3. There are 38 Apache projects hosting docker images on Docker hub 
> using Apache Organization. By browsing Apache github mirror. There are only 7 
> projects using a separate repository for docker image build. Popular projects 
> official images are not from Apache organization, such as zookeeper, tomcat, 
> httpd. We may not disrupt what other Apache projects are doing, but it looks 
> like inline build process is widely employed by majority of projects such as 
> Nifi, Brooklyn, thrift, karaf, syncope and others. The situation seems a bit 
> chaotic for Apache as a whole. However, Hadoop community can decide what is 
> best for Hadoop. My preference is to remove ozone from source tree naming, if 
> Ozone is intended to be subproject of Hadoop for long period of time. This 
> enables Hadoop community to host docker images for various subproject without 
> having to check out several source tree to trigger a grand build. However, 
> inline build process seems more popular than separated process. Hence, I 
> highly recommend making docker build inline if possible.
> {quote}
> The main challenges are also discussed in the thread:
> {code:java}
> 3. Technically it would be possible to add the Dockerfile to the source
> tree and publish the docker image together with the release by the
> release manager but it's also problematic:
> {code}
> a) there is no easy way to stage the images for the vote
>  c) it couldn't be flagged as automated on dockerhub
>  d) It couldn't support the critical updates.
>  * Updating existing images (for example in case of an ssl bug, rebuild
>  all the existing images with exactly the same payload but updated base
>  image/os environment)
>  * Creating image for older releases (We would like to provide images,
>  for hadoop 2.6/2.7/2.7/2.8/2.9. Especially for doing automatic testing
>  with different versions).
> {code:java}
>  {code}
> The a) can be solved (as [~eyang] suggested) with using a personal docker 
> image during the vote and publish it to the dockerhub after the vote (in case 
> the permission can be set by the INFRA)
> Note: based on LEGAL-270 and linked discussion both approaches (inline build 
> process / external build process) are compatible with the apache release.
> Note: HDDS-851 and HADOOP-14898 contains more information about these 
> problems.





[jira] [Updated] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16097:
-
Attachment: HADOOP-16097.002.patch

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.






[jira] [Commented] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766201#comment-16766201
 ] 

Erik Krogen commented on HADOOP-16097:
--

Thanks for the tip and good catch [~linyiqun]! I have uploaded a v002 patch 
fixing the image issue, and a rendered version.

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.






[jira] [Updated] (HADOOP-16097) Provide proper documentation for FairCallQueue

2019-02-12 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-16097:
-
Attachment: FairCallQueueGuide_Rendered.pdf

> Provide proper documentation for FairCallQueue
> --
>
> Key: HADOOP-16097
> URL: https://issues.apache.org/jira/browse/HADOOP-16097
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ipc
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: FairCallQueueGuide_Rendered.pdf, HADOOP-16097.000.patch, 
> HADOOP-16097.001.patch, HADOOP-16097.002.patch, faircallqueue-overview.png
>
>
> FairCallQueue, added in HADOOP-10282, doesn't seem to be well-documented 
> anywhere. Let's add in a new documentation for it and related components.






[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option

2019-02-12 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766158#comment-16766158
 ] 

Andrew Olson commented on HADOOP-15281:
---

Updated fix versions. If there is a compelling reason to patch this into 3.1 I 
can make the necessary logging changes.

> Distcp to add no-rename copy option
> ---
>
> Key: HADOOP-15281
> URL: https://issues.apache.org/jira/browse/HADOOP-15281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Andrew Olson
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch, 
> HADOOP-15281-003.patch, HADOOP-15281-004.patch
>
>
> Currently Distcp uploads a file by two strategies
> # append parts
> # copy to temp then rename
> option 2 executes the following sequence in {{promoteTmpToTarget}}
> {code}
> if ((fs.exists(target) && !fs.delete(target, false))
> || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
> || !fs.rename(tmpTarget, target)) {
>   throw new IOException("Failed to promote tmp-file:" + tmpTarget
>   + " to: " + target);
> }
> {code}
> For any object store, that's a lot of HTTP requests; for S3A you are looking 
> at 12+ requests and an O(data) copy call. 
> This is not a good upload strategy for any store which manifests its output 
> atomically at the end of the write().
> Proposed: add a switch to write directly to the dest path, which can be 
> supplied as either a conf option (distcp.direct.write) or a CLI option 
> (-direct).
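For illustration, the two strategies can be sketched with java.nio.file stand-ins (assumed method names, not the actual distcp code). On a POSIX filesystem the rename is cheap; on an object store each exists/delete/mkdirs/rename turns into HTTP requests plus an O(data) server-side copy, which the proposed -direct mode avoids.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirectWriteDemo {
    // Strategy 2 today: write to a tmp file, then promote it
    // (the exists/delete/mkdirs/rename sequence of promoteTmpToTarget).
    static void copyViaTmpAndRename(byte[] data, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        Path tmp = target.resolveSibling(target.getFileName() + ".distcp.tmp");
        Files.write(tmp, data);
        if (Files.exists(target)) {
            Files.delete(target);
        }
        Files.move(tmp, target);   // the rename step distcp performs
    }

    // Proposed -direct / distcp.direct.write: write straight to the target,
    // relying on stores that manifest output atomically at close().
    static void copyDirect(byte[] data, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        Files.write(target, data);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("distcp-demo");
        byte[] payload = "hello".getBytes();
        copyViaTmpAndRename(payload, dir.resolve("a/part-0000"));
        copyDirect(payload, dir.resolve("b/part-0000"));
        if (!new String(Files.readAllBytes(dir.resolve("a/part-0000"))).equals("hello")
            || !new String(Files.readAllBytes(dir.resolve("b/part-0000"))).equals("hello")) {
            throw new AssertionError("copy mismatch");
        }
        System.out.println("ok");
    }
}
```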






[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option

2019-02-12 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-15281:
--
Fix Version/s: (was: 3.1.3)
   3.3.0

> Distcp to add no-rename copy option
> ---
>
> Key: HADOOP-15281
> URL: https://issues.apache.org/jira/browse/HADOOP-15281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Andrew Olson
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch, 
> HADOOP-15281-003.patch, HADOOP-15281-004.patch
>
>
> Currently Distcp uploads a file by two strategies
> # append parts
> # copy to temp then rename
> option 2 executes the following sequence in {{promoteTmpToTarget}}
> {code}
> if ((fs.exists(target) && !fs.delete(target, false))
> || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
> || !fs.rename(tmpTarget, target)) {
>   throw new IOException("Failed to promote tmp-file:" + tmpTarget
>   + " to: " + target);
> }
> {code}
> For any object store, that's a lot of HTTP requests; for S3A you are looking 
> at 12+ requests and an O(data) copy call. 
> This is not a good upload strategy for any store which manifests its output 
> atomically at the end of the write().
> Proposed: add a switch to write directly to the dest path, which can be 
> supplied as either a conf option (distcp.direct.write) or a CLI option 
> (-direct).






[jira] [Commented] (HADOOP-16098) Fix javadoc warnings in hadoop-aws

2019-02-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16766053#comment-16766053
 ] 

Steve Loughran commented on HADOOP-16098:
-

thanks for this, and sorry. I thought I'd gone through all the javadoc 
complaints in Yetus. Is it only picking up a subset of the issues?

> Fix javadoc warnings in hadoop-aws
> --
>
> Key: HADOOP-16098
> URL: https://issues.apache.org/jira/browse/HADOOP-16098
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16098.001.patch
>
>
> mvn package -Pdist fails due to javadoc warnings in hadoop-aws.






[jira] [Resolved] (HADOOP-15364) Add support for S3 Select to S3A

2019-02-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15364.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

Closing as duplicate; it's in, though not through the MR pipeline. 
Contributions there encouraged.

> Add support for S3 Select to S3A
> 
>
> Key: HADOOP-15364
> URL: https://issues.apache.org/jira/browse/HADOOP-15364
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15364-001.patch, HADOOP-15364-002.patch, 
> HADOOP-15364-004.patch
>
>
> Expect a PoC patch for this in a couple of days; 
> * it'll depend on an SDK update to work, plus a couple of of other minor 
> changes
> * Adds command line option too 
> {code}
> hadoop s3guard select -header use -compression gzip -limit 100 
> s3a://landsat-pds/scene_list.gz" \
> "SELECT s.entityId FROM S3OBJECT s WHERE s.cloudCover = '0.0' "
> {code}
> For wider use we'll need to implement the HADOOP-15229 so that callers can 
> pass down the expression along with any other parameters


