[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-25 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777665#comment-16777665
 ] 

t oo commented on HADOOP-16055:
---

bump

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 edited a comment on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-25 Thread GitBox
bharatviswa504 edited a comment on issue #518: HDDS-1178. Healthy pipeline 
Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467311361
 
 
   Thank you @anuengineer for the review.
   1. The 10% default is intentionally low: this rule's main purpose is to 
ensure that, once we are out of chill mode, we have at least a few pipelines 
for writes to succeed. (By the time the other rules complete, such as the 
container chill mode rule and the pipeline rule requiring at least one 
reported datanode, we might have more pipelines, so this rule errs on the 
conservative side.) Let me know if you want to change it to any other default 
value, or if you have another suggestion for the default.
   2. Thanks for catching it. Done.
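
   The threshold check being discussed can be sketched roughly as below. This 
is an illustrative sketch only; the class and method names are hypothetical, 
not the actual HDDS-1178 code.

```java
// Hypothetical sketch of a "healthy pipeline" chill mode exit rule:
// leave chill mode once at least a given fraction (e.g. 10%) of the
// known pipelines have reported healthy.
class HealthyPipelineRule {
    private final double threshold;       // e.g. 0.10 for 10%
    private final int totalPipelines;     // pipelines known to SCM
    private int healthyReported;          // healthy reports seen so far

    HealthyPipelineRule(double threshold, int totalPipelines) {
        this.threshold = threshold;
        this.totalPipelines = totalPipelines;
    }

    // Called once per healthy pipeline report that arrives.
    void onHealthyPipelineReport() {
        healthyReported++;
    }

    // Satisfied once the healthy fraction reaches the threshold.
    boolean isSatisfied() {
        if (totalPipelines == 0) {
            return true; // nothing to wait for
        }
        return (double) healthyReported / totalPipelines >= threshold;
    }
}
```

   With a 10% threshold and 20 known pipelines, two healthy reports are 
enough to satisfy the rule, which matches the "just enough pipelines for 
writes" intent described above.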


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-25 Thread GitBox
bharatviswa504 commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467311361
 
 
   Thank you @anuengineer for the review.
   1. The 10% default is intentionally low: this rule's main purpose is to 
ensure that, once we are out of chill mode, we have at least a few pipelines 
for writes to succeed. (By the time the other rules complete, such as the 
container chill mode rule and the pipeline rule requiring at least one 
reported datanode, we might have more pipelines, so this rule errs on the 
conservative side.) Let me know if you want to change it to any other default 
value.
   2. Thanks for catching it. Done.





[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2019-02-25 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777597#comment-16777597
 ] 

Ajay Kumar commented on HADOOP-15889:
-

+1. I will give it a few days in case anyone else has comments on this.

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch, 
> HADOOP-15889.002.patch, HADOOP-15889.003.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.
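
As a rough illustration of the idea (a sketch, not the actual HADOOP-15889 
patch): a base64 token passed through configuration only needs to be decoded 
back to bytes before being handed to the credentials machinery. The property 
and class names below are hypothetical.

```java
import java.util.Base64;

// Minimal sketch: decode a base64-encoded delegation token taken from a
// configuration value (e.g. a hypothetical "hadoop.token" property) back
// into the raw bytes a credential store would ingest. WebHDFS-style
// delegation tokens use URL-safe base64.
class TokenConfigSketch {
    static byte[] decodeTokenValue(String base64Token) {
        return Base64.getUrlDecoder().decode(base64Token);
    }
}
```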






[GitHub] anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-25 Thread GitBox
anuengineer commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467277972
 
 
   A couple of comments:
   1. Why is it 10%? Isn't that too low?
   2. I see that for each pipeline report arrival, we check the pipeline 
manager for the state -- to check whether the pipeline is healthy. Isn't 
there a race condition here? How do we guarantee that this check and the 
pipeline report update do not race with each other?
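
   One conventional answer to this kind of check-then-act concern is to make 
the state check and the report update a single atomic step. A generic sketch, 
not the actual SCM code; all names here are illustrative:

```java
// Generic sketch of avoiding a check-then-act race: the state lookup and
// the report update happen under one lock, so no other thread can change
// the state between the check and the update.
class PipelineStateGuard {
    private final Object lock = new Object();
    private boolean healthy;
    private int healthyReports;

    void markHealthy(boolean value) {
        synchronized (lock) {
            healthy = value;
        }
    }

    // Check the state and record the report as one atomic step.
    void onReport() {
        synchronized (lock) {
            if (healthy) {
                healthyReports++;
            }
        }
    }

    int healthyReports() {
        synchronized (lock) {
            return healthyReports;
        }
    }
}
```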





[GitHub] hadoop-yetus commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-25 Thread GitBox
hadoop-yetus commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode 
Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467265319
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1008 | trunk passed |
   | +1 | compile | 75 | trunk passed |
   | -0 | checkstyle | 26 | The patch fails to run checkstyle in hadoop-hdds |
   | -1 | mvnsite | 20 | server-scm in trunk failed. |
   | +1 | shadedclient | 743 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 18 | server-scm in trunk failed. |
   | -1 | javadoc | 18 | server-scm in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | -1 | mvninstall | 12 | server-scm in the patch failed. |
   | +1 | compile | 68 | the patch passed |
   | +1 | javac | 68 | the patch passed |
   | -0 | checkstyle | 18 | The patch fails to run checkstyle in hadoop-hdds |
   | -1 | mvnsite | 13 | server-scm in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 16 | server-scm in the patch failed. |
   | -1 | javadoc | 16 | server-scm in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 73 | common in the patch failed. |
   | -1 | unit | 16 | server-scm in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3366 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/518 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 970154b15eab 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a6ab371 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out//testptch/patchprocess/maven-branch-checkstyle-hadoop-hdds.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/branch-mvnsite-hadoop-hdds_server-scm.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/branch-javadoc-hadoop-hdds_server-scm.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-mvninstall-hadoop-hdds_server-scm.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out//testptch/patchprocess/maven-patch-checkstyle-hadoop-hdds.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-mvnsite-hadoop-hdds_server-scm.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-findbugs-hadoop-hdds_server-scm.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-javadoc-hadoop-hdds_server-scm.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/testReport/ |
   | Max. process+thread count | 436 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777468#comment-16777468
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 45 
new + 27 unchanged - 0 fixed = 72 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 45s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.s3a.TestStreamChangeTracker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960097/HADOOP--15625-006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 91ebde3c5320 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a6ab371 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15974/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15974/artifact/out/whitespace-eol.txt
 |
| unit | 

[GitHub] bharatviswa504 opened a new pull request #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-25 Thread GitBox
bharatviswa504 opened a new pull request #518: HDDS-1178. Healthy pipeline 
Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518
 
 
   





[jira] [Comment Edited] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777445#comment-16777445
 ] 

Steve Loughran edited comment on HADOOP-15625 at 2/26/19 12:55 AM:
---

Ben: I've done an iteration on this; handing it back to you. The key change, 
apart from moving the classes into a new package, is pulling the change logic 
out of S3AInputStream into a self-contained change tracker class, whose logic 
we can test in a unit test. 

Testing: unit tests done. I did a test of the input stream, but haven't rerun 
it since my last set of changes. It's late. Sorry.

Handing it back to you for some of the todo list, especially: should we have 
an option to require version checking?

Other than that
* Style checking. Javadocs are critical, as Java 8 needs the "." at the end. 
Line length-wise, we like to keep to nearly 80 columns, due to the goal of 
side-by-side diff checking. I know, it'd be good to move on at some point, 
but until then...
* Docs. Something to discuss the option.
* core-site.xml: Add the default options there (etag @ warn?) alongside the 
other fs.s3a options. 

I've pushed up to github a branch which first applied your patch 005, then 
added the new code. If you cherry pick the head of that branch, your local 
branch will catch up
https://github.com/steveloughran/hadoop/tree/s3/HADOOP-15625-streamchanged




was (Author: ste...@apache.org):
Ben: I've done an iteration on this; handing back to you. The key change is 
(apart from moving the classes into a new package), pulling the change logic 
out of S3AInputStream and into a self contained change tracker class, whose 
logic we can test in a unit test. 

Testing: unit tests done, did a test of the input stream but not rerun it since 
my last set of changes. It's late. Sorry.

Handing it back to you for some of the todo list, especially: should we have an 
option to require version checking?

Other than that
* Style checking. Javadocs are critical as java 8 needs the "." at the end. 
Line length-wise, we like to keep down to nearly 80, due to the goal of 
side-by-side diff checking. I know, it'd be good to move on at some point, but 
until then...
* Docs. Something to discuss the option.
* core-site.xml: Add the default options there (etag @ warn?) alongside the 
other fs.s3a options. 

I've pushed up to github a branch which first applied your patch 005, then 
added the new code. If you cherry pick the head of that branch, your local 
branch will catch up
https://github.com/steveloughran/hadoop/tree/s3/HADOOP-15625-streamchanged



> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get one of: old data, new data, or perhaps even go from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
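
The numbered steps in the quoted description can be sketched as a tiny 
tracker. This is a simplified illustration of the approach, not the change 
tracker class from the patch:

```java
import java.io.IOException;

// Simplified sketch of etag-based change detection for a reopened stream:
// remember the etag from the first request, verify it on every subsequent
// GET, and fail loudly if the remote object has changed.
class EtagChangeTracker {
    private String expectedEtag; // cached from the first HEAD/GET

    void processResponse(String responseEtag) throws IOException {
        if (expectedEtag == null) {
            expectedEtag = responseEtag;     // step 1: cache the first etag
        } else if (!expectedEtag.equals(responseEtag)) {
            // step 3: the remote file changed during the read
            throw new IOException("Remote file changed: expected etag "
                + expectedEtag + " but server returned " + responseEtag);
        }
        // step 2: etag matched; the read can proceed safely
    }
}
```

The failure is deliberately an IOException rather than silently returning 
mixed old/new data, matching the "more dramatic failure" trade-off above.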






[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777445#comment-16777445
 ] 

Steve Loughran commented on HADOOP-15625:
-

Ben: I've done an iteration on this; handing it back to you. The key change, 
apart from moving the classes into a new package, is pulling the change logic 
out of S3AInputStream into a self-contained change tracker class, whose logic 
we can test in a unit test. 

Testing: unit tests done. I did a test of the input stream, but haven't rerun 
it since my last set of changes. It's late. Sorry.

Handing it back to you for some of the todo list, especially: should we have 
an option to require version checking?

Other than that
* Style checking. Javadocs are critical, as Java 8 needs the "." at the end. 
Line length-wise, we like to keep to nearly 80 columns, due to the goal of 
side-by-side diff checking. I know, it'd be good to move on at some point, 
but until then...
* Docs. Something to discuss the option.
* core-site.xml: Add the default options there (etag @ warn?) alongside the 
other fs.s3a options. 

I've pushed up to github a branch which first applied your patch 005, then 
added the new code. If you cherry pick the head of that branch, your local 
branch will catch up
https://github.com/steveloughran/hadoop/tree/s3/HADOOP-15625-streamchanged









[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15625:

Status: Open  (was: Patch Available)







[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15625:

Attachment: HADOOP--15625-006.patch







[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15625:

Status: Patch Available  (was: Open)

Patch 006


* Moved all the implementation classes to a new package fs.s3a.impl. This is 
just because I've been thinking about cleaning up the fs.s3a package at some 
point, and placing files in there makes it clear that this is for 
implementation only.
* new LogExactlyOnce class, so that the FS logs only once, on a single input 
stream, if versioning = true but versioning isn't supported 
* Marked {{RemoteFileChangedException}} as public/unstable
* added change/revision policy
* Reopen now raises a PathIOException on a null wrapped stream. I don't think 
we've ever hit that codepath, but...

Big: moved all change tracking logic into 
org.apache.hadoop.fs.s3a.impl.ChangeTracker; this makes unit testing the 
failure logic easier, especially version mismatch.
Also handles the special case "null came back but we have no revision ID" with 
a different message. I don't know how to trigger that, but it's there anyway 
and can be tested for.


TODO

* document
* create some of the output logs for invalid conditions and include in the 
troubleshooting doc
* Maybe: have the InconsistentAmazonS3Client simulate version inconsistency. 
But with the isolated change tracking, I don't see the point in that
* Now, should we be ruthless and add a "require versionID option" which fails 
fast if version = off? That way: you'll know you've got versioning.
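
The LogExactlyOnce behaviour mentioned above can be sketched with an atomic 
flag. This is a hand-rolled illustration, not the class from the patch; the 
names are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Sketch of a log-exactly-once guard: however many input streams hit the
// condition, only the first caller actually emits the warning.
class OnceLogger {
    private final AtomicBoolean logged = new AtomicBoolean(false);
    private final Consumer<String> sink; // e.g. LOG::warn in real code

    OnceLogger(Consumer<String> sink) {
        this.sink = sink;
    }

    void warn(String message) {
        // compareAndSet succeeds for exactly one caller, even under
        // concurrent access, so the message is emitted at most once.
        if (logged.compareAndSet(false, true)) {
            sink.accept(message);
        }
    }
}
```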








[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777417#comment-16777417
 ] 

Hadoop QA commented on HADOOP-16127:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 103 unchanged - 3 fixed = 103 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16127 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960082/c16127_20190225.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f5b7d03406b6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9de34d2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15972/testReport/ |
| Max. process+thread count | 1354 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15972/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[GitHub] hadoop-yetus commented on issue #517: HADOOP-16147: Allow CopyListing sequence file keys and values to be m…

2019-02-25 Thread GitBox
hadoop-yetus commented on issue #517: HADOOP-16147: Allow CopyListing sequence 
file keys and values to be m…
URL: https://github.com/apache/hadoop/pull/517#issuecomment-467223367
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 86 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1290 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 48 | trunk passed |
   | +1 | javadoc | 27 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 28 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-distcp: The patch generated 6 
new + 42 unchanged - 0 fixed = 48 total (was 42) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 900 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 902 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 4505 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/517 |
   | JIRA Issue | HADOOP-16147 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 01c252a2f85e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9de34d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/testReport/ |
   | Max. process+thread count | 294 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777378#comment-16777378
 ] 

Hadoop QA commented on HADOOP-16147:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
6 new + 42 unchanged - 0 fixed = 48 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
2s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/517 |
| JIRA Issue | HADOOP-16147 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 01c252a2f85e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 9de34d2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/1/testReport/ |
| Max. process+thread count | 294 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 

[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777376#comment-16777376
 ] 

Hadoop QA commented on HADOOP-16127:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 103 unchanged - 3 fixed = 103 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16127 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960079/c16127_20190225.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3f243c0a39ca 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba4e7bd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15971/testReport/ |
| Max. process+thread count | 1452 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15971/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777369#comment-16777369
 ] 

Hadoop QA commented on HADOOP-16147:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
6 new + 42 unchanged - 0 fixed = 48 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
56s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960083/HADOOP-16147-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 599d61265a1a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9de34d2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15973/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15973/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 

[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Attachment: c16127_20190225.patch

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.
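
The race described above is a classic check-then-act gap: the `running` flag is read, `stop()` runs in between, and the new connection is inserted into a pool that has already been drained. The following is a minimal sketch, using a hypothetical `ConnectionPool` class rather than the real `org.apache.hadoop.ipc.Client` internals, of both the gap and the usual remedy of re-checking the flag after the put and undoing the insertion:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified illustration; names (ConnectionPool, Connection) are
// stand-ins, not the actual ipc.Client types.
class ConnectionPool {
    private final ConcurrentHashMap<String, Connection> connections =
        new ConcurrentHashMap<>();
    private final AtomicBoolean running = new AtomicBoolean(true);

    Connection getConnection(String remoteId) {
        if (!running.get()) {                       // (1) check
            throw new IllegalStateException("client stopped");
        }
        // stop() may run here: running flips to false and the pool drains.
        Connection conn =
            connections.computeIfAbsent(remoteId, Connection::new); // (2) act
        // Re-check after insertion; undo our put if stop() raced with us,
        // so no live connection outlives the stopped client.
        if (!running.get()) {
            connections.remove(remoteId, conn);
            conn.close();
            throw new IllegalStateException("client stopped");
        }
        return conn;
    }

    void stop() {
        running.set(false);
        connections.values().forEach(Connection::close);
        connections.clear();
    }

    static class Connection {
        final String id;
        private volatile boolean closed;
        Connection(String id) { this.id = id; }
        void close() { closed = true; }
        boolean isClosed() { return closed; }
    }
}
```

Without the second check, a connection inserted between (1) and (2) would survive `stop()` silently, which is exactly the behavior this issue reports.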






[jira] [Updated] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-25 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-16147:
--
  Assignee: Andrew Olson
Attachment: HADOOP-16147-001.patch
Status: Patch Available  (was: Open)

Attached preliminary patch for review.

Also created pull request here: https://github.com/apache/hadoop/pull/517

> Allow CopyListing sequence file keys and values to be more easily customized
> 
>
> Key: HADOOP-16147
> URL: https://issues.apache.org/jira/browse/HADOOP-16147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Major
> Attachments: HADOOP-16147-001.patch
>
>
> We have encountered a scenario where, when using the Crunch library to run a 
> distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need 
> to dynamically rename target paths to the preferred destination output part 
> file names, rather than retaining the original source path names.
> A custom CopyListing implementation appears to be the proper solution for 
> this. However, the place where the current SimpleCopyListing logic needs to be 
> adjusted is in a private method (writeToFileListing), so a relatively large 
> portion of the class would need to be cloned.
> To minimize the amount of code duplication required for such a custom 
> implementation, we propose adding two new protected methods to the 
> CopyListing class, that can be used to change the actual keys and/or values 
> written to the copy listing sequence file: 
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus);
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus);
> {noformat}
> The SimpleCopyListing class would then be modified to consume these methods 
> as follows,
> {noformat}
> fileListWriter.append(
>getFileListingKey(sourcePathRoot, fileStatus),
>getFileListingValue(fileStatus));
> {noformat}
> The default implementations would simply preserve the present behavior of the 
> SimpleCopyListing class, and could reside in either CopyListing or 
> SimpleCopyListing, whichever is preferable.
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus) {
>return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
> fileStatus.getPath()));
> }
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus) {
>return fileStatus;
> }
> {noformat}
> Please let me know if this proposal seems to be on the right track. If so I 
> can provide a patch.
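
The two proposed hooks can be exercised roughly as follows. This sketch uses plain `String` keys and a stand-in `FileEntry` type instead of the real `Text`/`Path`/`CopyListingFileStatus` classes so it runs on its own, and `RenamingCopyListing` is a hypothetical subclass for the part-file renaming use case, not part of distcp:

```java
// Sketch of the extension-point pattern from the proposal: the listing
// writer consumes two overridable hooks instead of computing keys/values
// inline in a private method.
class SimpleCopyListing {
    static class FileEntry {                  // stand-in for CopyListingFileStatus
        final String path;
        FileEntry(String path) { this.path = path; }
    }

    // Default key: path relative to the source root (analogous to
    // DistCpUtils.getRelativePath in the proposal).
    protected String getFileListingKey(String sourcePathRoot, FileEntry status) {
        return status.path.substring(sourcePathRoot.length());
    }

    // Default value: the file status, passed through unchanged.
    protected FileEntry getFileListingValue(FileEntry status) {
        return status;
    }

    // Analogue of writeToFileListing: subclasses customize only the hooks,
    // so nothing else needs to be cloned.
    String[] listingRecord(String sourcePathRoot, FileEntry status) {
        return new String[] {
            getFileListingKey(sourcePathRoot, status),
            getFileListingValue(status).path
        };
    }
}

// Hypothetical custom listing that renames targets to sequential part
// files, the Crunch use case described in the issue.
class RenamingCopyListing extends SimpleCopyListing {
    private int nextPart = 0;

    @Override
    protected String getFileListingKey(String sourcePathRoot, FileEntry status) {
        return String.format("/part-%05d", nextPart++);
    }
}
```

With this shape, the default class keeps its current behavior while a subclass overrides one small method to change the sequence-file keys.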






[GitHub] noslowerdna opened a new pull request #517: HADOOP-16147: Allow CopyListing sequence file keys and values to be m…

2019-02-25 Thread GitBox
noslowerdna opened a new pull request #517: HADOOP-16147: Allow CopyListing 
sequence file keys and values to be m…
URL: https://github.com/apache/hadoop/pull/517
 
 
   …ore easily customized





[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777319#comment-16777319
 ] 

Hadoop QA commented on HADOOP-15625:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 80 
new + 9 unchanged - 0 fixed = 89 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
33s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960074/HADOOP-15625-006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2594777a53db 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9537265 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15970/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15970/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15970/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777326#comment-16777326
 ] 

Hudson commented on HADOOP-16125:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16063/])
HADOOP-16125. Support multiple bind users in LdapGroupsMapping. (inigoiri: rev 
ba4e7bd1928a73d21a3dc5afb95f0d35d5b63000)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMappingWithBindUserSwitch.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/GroupsMapping.md
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMappingBase.java


> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, 
> HADOOP-16125.003.patch, HADOOP-16125.004.patch
>
>
> Currently, LdapGroupsMapping supports only a single user to bind to when 
> connecting to LDAP. This can be problematic if such user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary, more info in GroupsMapping.md / core-default.xml in the patches.
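The switching behavior described above can be sketched in a few lines. This is an illustrative stand-in, not LdapGroupsMapping's actual code: the class name `BindUserSwitcher`, the `Predicate`-based bind stub, and the sticky-rotation policy are all assumptions; the real configuration keys are documented in GroupsMapping.md / core-default.xml.

```java
import java.util.List;
import java.util.function.Predicate;

public class BindUserSwitcher {
    private final List<String> bindUsers; // bind DNs; passwords omitted for brevity
    private int current = 0;              // index of the user that last worked

    BindUserSwitcher(List<String> bindUsers) {
        this.bindUsers = bindUsers;
    }

    /**
     * Try each configured bind user in turn, starting from the one that worked
     * last; tryBind stands in for an actual LDAP bind attempt.
     */
    String bind(Predicate<String> tryBind) {
        for (int attempts = 0; attempts < bindUsers.size(); attempts++) {
            String user = bindUsers.get(current);
            if (tryBind.test(user)) {
                return user; // sticky: keep using this user until it fails
            }
            current = (current + 1) % bindUsers.size(); // switch on auth failure
        }
        throw new IllegalStateException("all configured bind users failed");
    }

    public static void main(String[] args) {
        BindUserSwitcher s = new BindUserSwitcher(List.of("cn=svc1", "cn=svc2"));
        // Simulate svc1's password having been reset: only svc2 can bind now.
        String bound = s.bind(u -> u.equals("cn=svc2"));
        System.out.println("bound as " + bound); // bound as cn=svc2
    }
}
```

With this shape, a password reset on one service account degrades to one failed bind attempt rather than a group-mapping outage.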



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-16125:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~lukmajercak] for the feature!
Committed [^HADOOP-16125.004.patch] to trunk.

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, 
> HADOOP-16125.003.patch, HADOOP-16125.004.patch
>
>
> Currently, LdapGroupsMapping supports only a single user to bind to when 
> connecting to LDAP. This can be problematic if such user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary, more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Attachment: (was: c16127_20190225.patch)

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.
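The race and one common shape of a fix can be sketched as follows. The patch itself is not reproduced in this thread, so this is illustrative only: `AtomicBoolean`/`ConcurrentHashMap` stand in for the client's state, and the double-check-and-undo after `putIfAbsent` is an assumed remedy, not necessarily the one in the attached patches.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class PutAfterStopSketch {
    static final AtomicBoolean running = new AtomicBoolean(true);
    static final ConcurrentHashMap<String, Object> connections = new ConcurrentHashMap<>();

    /**
     * Sketch of a double-check fix: after putIfAbsent, re-check running and
     * undo the insert if stop() raced with us in between the two steps.
     */
    static Object getConnection(String key) {
        if (!running.get()) {
            return null; // client already stopped
        }
        Object conn = new Object();
        Object prev = connections.putIfAbsent(key, conn);
        if (prev != null) {
            return prev; // someone else won the race to create it
        }
        if (!running.get()) {              // stop() may have run after our first check
            connections.remove(key, conn); // undo, so stop() is not left waiting forever
            return null;
        }
        return conn;
    }

    public static void main(String[] args) {
        Object c = getConnection("a");
        running.set(false);
        Object d = getConnection("b");
        System.out.println((c != null) + " " + (d == null)); // prints "true true"
    }
}
```

Without the second check, a connection inserted after `stop()` empties the table would keep `stop()`'s wait loop spinning indefinitely.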



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777310#comment-16777310
 ] 

Hudson commented on HADOOP-16126:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16062 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16062/])
HADOOP-16126. ipc.Client.stop() may sleep too long to wait for all (szetszwo: 
rev 0edb0c51dc2c4ae2f353e260f01912e28033d70f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.
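The change described above is small enough to show in full. A minimal sketch, with a `ConcurrentHashMap` standing in for the client's connection table (names here are illustrative, not Hadoop's actual fields):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class StopWaitSketch {
    // Stand-in for ipc.Client's connection table (illustrative only).
    static final ConcurrentHashMap<String, Object> connections = new ConcurrentHashMap<>();

    /** Poll until all connections are closed; 10ms interval instead of 100ms. */
    static void waitForConnectionsToClose() {
        while (!connections.isEmpty()) {
            try {
                TimeUnit.MILLISECONDS.sleep(10); // was 100ms; 10ms trims shutdown latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return;
            }
        }
    }

    public static void main(String[] args) {
        connections.put("conn-1", new Object());
        // Simulate another thread closing the last connection shortly after.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            connections.clear();
        }).start();
        waitForConnectionsToClose();
        System.out.println("all connections closed");
    }
}
```

The 10x Hive speedup makes sense for workloads that create and stop many short-lived clients: each stop() previously paid up to ~100ms of idle sleep even when the last connection closed almost immediately.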



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777301#comment-16777301
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16127:
--

c16127_20190225.patch: fixes checkstyle warning.

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Attachment: c16127_20190225.patch

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch, 
> c16127_20190225.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16127:
-
Target Version/s: 3.3.0  (was: 3.1.2)

> Just have set target version to 3.1.2. Thanks.

Oops, it should be 3.3.0, the newest unreleased version.

> In ipc.Client, put a new connection could happen after stop
> ---
>
> Key: HADOOP-16127
> URL: https://issues.apache.org/jira/browse/HADOOP-16127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16127_20190219.patch, c16127_20190220.patch
>
>
> In getConnection(..), running can be initially true but becomes false before 
> putIfAbsent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-25 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16126:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks [~arpitagarwal] for the review and [~ste...@apache.org] for the comments.

I have committed this.

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-25 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777292#comment-16777292
 ] 

Arpit Agarwal commented on HADOOP-16126:


+1

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777268#comment-16777268
 ] 

Ben Roling commented on HADOOP-15625:
-

New patch uploaded to make a minor variable name tweak in the tests.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch, HADOOP-15625-004.patch, HADOOP-15625-005.patch, 
> HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, caches the length from startup, and whenever a seek triggers a 
> new GET, you may get old data, new data, or perhaps even go from new data 
> back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
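The three numbered steps above amount to caching one string and comparing it on every subsequent response. A minimal sketch of that check, detached from S3A's actual classes (the class and method names here are illustrative assumptions):

```java
import java.io.IOException;

public class EtagChecker {
    private String expectedEtag; // step 1: cached from the first HEAD/GET response

    /** Remember the etag seen when the stream was opened. */
    void onFirstResponse(String etag) {
        this.expectedEtag = etag;
    }

    /** Steps 2-3: verify each later GET's etag; fail loudly if the object changed. */
    void verify(String responseEtag) throws IOException {
        if (expectedEtag != null && !expectedEtag.equals(responseEtag)) {
            throw new IOException("Remote file changed during read: etag was "
                + expectedEtag + ", now " + responseEtag);
        }
    }

    public static void main(String[] args) throws IOException {
        EtagChecker checker = new EtagChecker();
        checker.onFirstResponse("abc123");
        checker.verify("abc123"); // same version: OK
        try {
            checker.verify("def456"); // changed mid-read: IOException raised
        } catch (IOException expected) {
            System.out.println("detected change: " + expected.getMessage());
        }
    }
}
```

Trading silent corruption for an explicit IOException is the point: a reader that sees mixed old/new bytes has no way to know, while a reader that gets an exception can retry from scratch.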



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-25 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-006.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch, HADOOP-15625-004.patch, HADOOP-15625-005.patch, 
> HADOOP-15625-006.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, caches the length from startup, and whenever a seek triggers a 
> new GET, you may get old data, new data, or perhaps even go from new data 
> back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-25 Thread Andrew Olson (JIRA)
Andrew Olson created HADOOP-16147:
-

 Summary: Allow CopyListing sequence file keys and values to be 
more easily customized
 Key: HADOOP-16147
 URL: https://issues.apache.org/jira/browse/HADOOP-16147
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Reporter: Andrew Olson


We have encountered a scenario where, when using the Crunch library to run a 
distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need to 
dynamically rename target paths to the preferred destination output part file 
names, rather than retaining the original source path names.

A custom CopyListing implementation appears to be the proper solution for this. 
However, the place where the current SimpleCopyListing logic needs to be 
adjusted is in a private method (writeToFileListing), so a relatively large 
portion of the class would need to be cloned.

To minimize the amount of code duplication required for such a custom 
implementation, we propose adding two new protected methods to the CopyListing 
class, that can be used to change the actual keys and/or values written to the 
copy listing sequence file: 

{noformat}
protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
fileStatus);

protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
fileStatus);
{noformat}

The SimpleCopyListing class would then be modified to consume these methods as 
follows,
{noformat}
fileListWriter.append(
   getFileListingKey(sourcePathRoot, fileStatus),
   getFileListingValue(fileStatus));
{noformat}

The default implementations would simply preserve the present behavior of the 
SimpleCopyListing class, and could reside in either CopyListing or 
SimpleCopyListing, whichever is preferable.

{noformat}
protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
fileStatus) {
   return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
fileStatus.getPath()));
}

protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
fileStatus) {
   return fileStatus;
}
{noformat}

Please let me know if this proposal seems to be on the right track. If so I can 
provide a patch.
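The effect of overriding the proposed getFileListingKey hook can be sketched with plain java.nio paths; DistCp's Text/Path types are replaced with stand-ins, and the "flattened" key policy is a hypothetical example of what a Crunch-driven subclass might do, not part of the proposal itself.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListingKeySketch {
    /** Default behavior from the proposal: key is the source-relative path. */
    static String defaultKey(Path sourceRoot, Path file) {
        return "/" + sourceRoot.relativize(file);
    }

    /** A hypothetical override: key by file name only, renaming targets flat. */
    static String flattenedKey(Path sourceRoot, Path file) {
        return "/" + file.getFileName();
    }

    public static void main(String[] args) {
        Path root = Paths.get("/data/in");
        Path f = Paths.get("/data/in/2019/02/part-0");
        System.out.println(defaultKey(root, f));   // /2019/02/part-0
        System.out.println(flattenedKey(root, f)); // /part-0
    }
}
```

A subclass overriding only these two hooks would inherit all of SimpleCopyListing's traversal and filtering, which is exactly the code-duplication the proposal is trying to avoid.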



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16131) Support reencrypt in KMS Benchmark

2019-02-25 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang reassigned HADOOP-16131:
-

Assignee: George Huang

> Support reencrypt  in KMS Benchmark
> ---
>
> Key: HADOOP-16131
> URL: https://issues.apache.org/jira/browse/HADOOP-16131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
>
> It would be nice to support KMS reencrypt related operations -- reencrypt, 
> invalidateCache, rollNewVersion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16130) Support delegation token operations in KMS Benchmark

2019-02-25 Thread George Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Huang reassigned HADOOP-16130:
-

Assignee: George Huang

> Support delegation token operations in KMS Benchmark
> 
>
> Key: HADOOP-16130
> URL: https://issues.apache.org/jira/browse/HADOOP-16130
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
>
> At the last Hadoop Contributors Meetup, [~daryn] shared that another KMS 
> throughput bottleneck is ZooKeeper -- KMS uses ZK to store delegation tokens. 
> ZK would be brought to a halt when expired delegation tokens are purged. That 
> sounds critical, especially given that in most deployments KMS shares the 
> same ZK quorum as HDFS, so the purge could cause a NameNode failover.
> The current KMS benchmark does not support delegation token operations 
> (addDelegationTokens, cancelDelegationToken, renewDelegationToken) so it's 
> hard to understand how bad it is, and hard to quantify the improvement of a 
> fix.
> File this jira to support those operations before we move on to the fix for 
> the ZK issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16035) Jenkinsfile for Hadoop

2019-02-25 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777132#comment-16777132
 ] 

Allen Wittenauer commented on HADOOP-16035:
---

bq.  Is there any suggestion how hdds/ozone/submarine projects can be 
supported? 

Like I told you back in October-ish: via the hadoop personality.  



> Jenkinsfile for Hadoop
> --
>
> Key: HADOOP-16035
> URL: https://issues.apache.org/jira/browse/HADOOP-16035
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16035.00.patch, HADOOP-16035.01.patch
>
>
> In order to enable Github Branch Source plugin on Jenkins to test Github PRs 
> with Apache Yetus:
> - an account that can read Github
> - Apache Yetus 0.9.0+
> - a Jenkinsfile that uses the above



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777086#comment-16777086
 ] 

Masatake Iwasaki commented on HADOOP-16068:
---

LGTM overall. The added tests, including ITestAbfsDelegationTokens, succeeded 
in the Japan region. Some nits found while walking through the patch.

AbfsDtFetcher.java: javadoc of getServiceName (just copied from HdfsDtFetcher) 
should be updated.

src/main/resources/META-INF/services/org.apache.hadoop.security.token.DtFetcher:
 has duplicate entries.

TestCustomOauthTokenProvider.java: subclass of AbstractAbfsIntegrationTest 
should be prefixed with ITest*?

abfs.md:
{code:java}
The [az 
storage](https://docs.microsoft.com/en-us/cli/azure/storage/account?view=azure-cli-latest)
 subcommand
handles all storage commands, [`az storage account 
create`](https://docs.microsoft.com/en-us/cli/azure/storage/account?view=azure-cli-latest#az-storage-account-create)
{code}
Intended target of the first link would be 
[https://docs.microsoft.com/en-us/cli/azure/storage?view=azure-cli-latest].
{code:java}
You can list locations from az account list-locations
{code}
The following command lines do not show the usage of list-locations.
{code:java}
or you can configure an identity to be used only for a specific storage account 
with
`fs.azure.account.oauth2.client.id.\.dfs.core.windows.net`.
{code}
You don't need backslash here.

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch, 
> HADOOP-16068-009.patch, HADOOP-16068-010.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777039#comment-16777039
 ] 

Hadoop QA commented on HADOOP-16068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 100 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960040/HADOOP-16068-010.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux a020e315b8f3 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6cec906 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15969/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15969/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 

[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Attachment: HADOOP-16068-010.patch

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch, 
> HADOOP-16068-009.patch, HADOOP-16068-010.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

HADOOP-16068 patch 010

HADOOP-16139

error message on Oauth problems improved; documentation covers it too

Tested: attempting to talk to Azure Cardiff with OAuth auth; failing with auth 
issues

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch, 
> HADOOP-16068-009.patch, HADOOP-16068-010.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Open  (was: Patch Available)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch, 
> HADOOP-16068-009.patch, HADOOP-16068-010.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-25 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776937#comment-16776937
 ] 

Stephen O'Donnell commented on HADOOP-16140:


Thanks all for looking into this.

The idea behind this jira is that any time I have seen a support case related 
to emptying the trash, the users think expunge should empty it immediately.

Expunge means "obliterate or remove completely", but the command does not do 
that, which is why it's so confusing. So we can fix this in a few ways:

1. Make expunge actually empty the trash by default, which is what the command 
name suggests - I suspect we don't want to do this for compatibility reasons.

2. Add a flag to expunge (-immediate or -immediately), to override the current 
behaviour and clear the trash now. Having thought about this, I am coming 
around to this being the best idea.

3. Have a new emptyTrash command, which makes the purpose of expunge even more 
confusing.

Adam has suggested a dry-run option, and earlier in this thread Inigo suggested 
a confirmation message if you are emptying the trash now. I can see some merit 
in these, but even with the trash we see a remarkable number of cases where 
people accidentally delete data with -skipTrash. I fear we will see 
'accidental emptying of the trash' no matter what safety checks we add.

If the data gets into the trash, and the default expunge action is as before 
(ie retain trash for 24 hours by default), then requiring an "-immediate" flag 
to be passed to delete it now means we have already offered two lines of 
defence against accidental deletion. If we go that way, I think a confirmation 
message is unnecessary. I am not sure about the -dry-run option and how often 
it would be used over someone just listing the trash they are about to delete.

Steve also wants the ability to pass a filesystem as raised in HADOOP-13656 - I 
wonder if we should solve and commit this jira and then add in the filesystem 
switch afterwards (I am happy to work on it if we can get this one done).

I would also like the ability to pass the trash folder you wish to empty, so you 
can empty your own trash in an EZ, or so a super user can clear any trash - that 
could be done here or in a follow-up Jira too.

Can others chime in on the best direction here? I.e.:

1. Can we agree the best approach is adding "-immediate" to expunge and forget 
about the emptyTrash command?

2. Can we keep HADOOP-13656 separate and resolve it after this one?

3. We should allow a specific trash directory to be specified and do that in a 
separate Jira?

4. Should we add a dry-run option or not when -immediate is passed?
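For what it is worth, if option 2 is chosen the semantic difference is small: 
plain -expunge keeps today's checkpoint-and-age-out behaviour, while 
-expunge -immediate would purge Current and every checkpoint in one pass. A 
minimal sketch of the proposed immediate path (the flag and helper name are the 
proposal under discussion, not committed code):

```python
import os
import shutil

def expunge_immediate(trash_root):
    """Proposed '-expunge -immediate' behaviour: purge the Current
    directory and all checkpoints now, ignoring fs.trash.interval."""
    for name in os.listdir(trash_root):
        path = os.path.join(trash_root, name)
        if os.path.isdir(path):
            shutil.rmtree(path)   # Current or a timestamped checkpoint
        else:
            os.remove(path)       # any stray file in the trash root
```

Since the data already survived one deletion (it went to trash), the explicit 
flag is the second line of defence mentioned above.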


> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash" 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
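The two-step workaround in the description relies on expunge's checkpoint 
semantics: each run rolls the Current directory into a timestamped checkpoint 
and deletes checkpoints older than fs.trash.interval. A minimal sketch of that 
logic (the checkpoint-name format and function shape are illustrative, not the 
actual TrashPolicyDefault code):

```python
import calendar
import os
import shutil
import time

CHECKPOINT_FMT = "%y%m%d%H%M%S"  # illustrative checkpoint-name format

def expunge(trash_root, interval_minutes, now=None):
    """Sketch of expunge: roll Current into a timestamped checkpoint,
    then delete checkpoints older than the trash interval."""
    now = time.time() if now is None else now
    current = os.path.join(trash_root, "Current")
    if os.path.isdir(current):
        stamp = time.strftime(CHECKPOINT_FMT, time.gmtime(now))
        os.rename(current, os.path.join(trash_root, stamp))
    for name in os.listdir(trash_root):
        try:
            created = calendar.timegm(time.strptime(name, CHECKPOINT_FMT))
        except ValueError:
            continue  # not a checkpoint directory, skip it
        if now - created > interval_minutes * 60:
            shutil.rmtree(os.path.join(trash_root, name))
```

The workaround is then `expunge(root, 1440)` followed, a minute later, by 
`expunge(root, 1)`: the second call ages out the checkpoint the first one 
created.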



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776930#comment-16776930
 ] 

Steve Loughran edited comment on HADOOP-16068 at 2/25/19 2:30 PM:
--

Next patch: log when OAuth returns HTML. Doesn't fix the problem, but helps
{code}
 bin/hadoop fs -ls abfs://contai...@abfswales1.dfs.core.windows.net/
14:22:01
2019-02-25 14:23:07,975 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:initialize(101)) - Initializing AzureBlobFileSystem 
for abfs://contai...@abfswales1.dfs.core.windows.net/
2019-02-25 14:23:08,182 [main] DEBUG services.AbfsClientThrottlingIntercept 
(AbfsClientThrottlingIntercept.java:initializeSingleton(62)) - Client-side 
throttling is enabled for the ABFS file system.
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:getFileStatus(434)) - 
AzureBlobFileSystem.getFileStatus path: 
abfs://contai...@abfswales1.dfs.core.windows.net/
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:performAbfsAuthCheck(1101)) - ABFS authorizer is not 
initialized. No authorization check will be performed.
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystemStore 
(AzureBlobFileSystemStore.java:getIsNamespaceEnabled(200)) - Get root ACL status
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(AccessTokenProvider.java:isTokenAboutToExpire(77)) - AADToken: no token. 
Returning expiring=true
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(AccessTokenProvider.java:getToken(48)) - AAD Token is missing or expired: 
Calling refresh-token from abstract base class
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(ClientCredsTokenProvider.java:refreshToken(57)) - AADToken: refreshing 
client-credential based token
2019-02-25 14:23:08,274 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenUsingClientCreds(94)) - AADToken: starting 
to fetch token using client creds for client ID 
40c8b4e5-865a-4297-bbbd-195df2a8a806
2019-02-25 14:23:08,274 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenSingleCall(297)) - Requesting an OAuth token 
by POST to 
https://login.microsoftonline.com/b60c9401-2154-40aa-9cff-5e3d1a20085d/oauth2/authorize
2019-02-25 14:23:08,275 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(51)) - Request Headers
2019-02-25 14:23:08,275 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Connection=close
2019-02-25 14:23:08,872 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenSingleCall(319)) - Response 200
2019-02-25 14:23:08,872 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(51)) - Response Headers
2019-02-25 14:23:08,873 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   HTTP Response=HTTP/1.1 200 OK
2019-02-25 14:23:08,873 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Server=Microsoft-IIS/10.0
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   X-Content-Type-Options=nosniff
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Connection=close
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Pragma=no-cache
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   P3P=CP="DSP CUR OTPi IND OTRi 
ONL FIN"
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Date=Mon, 25 Feb 2019 14:23:08 
GMT
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   X-Frame-Options=DENY
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   
Strict-Transport-Security=max-age=31536000; includeSubDomains
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Cache-Control=no-cache, 
no-store
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   X-DNS-Prefetch-Control=on
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Expires=-1
2019-02-25 14:23:08,876 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Content-Length=27304
2019-02-25 14:23:08,876 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   
x-ms-request-id=a48ce162-4579-4ae7-ac93-97e304b05400
2019-02-25 14:23:08,876 [main] DEBUG services.AbfsIoUtils 

[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776930#comment-16776930
 ] 

Steve Loughran commented on HADOOP-16068:
-

Next patch: log when OAuth returns HTML. Doesn't fix the problem, but helps
{code}
 bin/hadoop fs -ls abfs://contai...@abfswales1.dfs.core.windows.net/
14:22:01
2019-02-25 14:23:07,975 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:initialize(101)) - Initializing AzureBlobFileSystem 
for abfs://contai...@abfswales1.dfs.core.windows.net/
2019-02-25 14:23:08,182 [main] DEBUG services.AbfsClientThrottlingIntercept 
(AbfsClientThrottlingIntercept.java:initializeSingleton(62)) - Client-side 
throttling is enabled for the ABFS file system.
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:getFileStatus(434)) - 
AzureBlobFileSystem.getFileStatus path: 
abfs://contai...@abfswales1.dfs.core.windows.net/
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystem 
(AzureBlobFileSystem.java:performAbfsAuthCheck(1101)) - ABFS authorizer is not 
initialized. No authorization check will be performed.
2019-02-25 14:23:08,204 [main] DEBUG azurebfs.AzureBlobFileSystemStore 
(AzureBlobFileSystemStore.java:getIsNamespaceEnabled(200)) - Get root ACL status
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(AccessTokenProvider.java:isTokenAboutToExpire(77)) - AADToken: no token. 
Returning expiring=true
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(AccessTokenProvider.java:getToken(48)) - AAD Token is missing or expired: 
Calling refresh-token from abstract base class
2019-02-25 14:23:08,272 [main] DEBUG oauth2.AccessTokenProvider 
(ClientCredsTokenProvider.java:refreshToken(57)) - AADToken: refreshing 
client-credential based token
2019-02-25 14:23:08,274 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenUsingClientCreds(94)) - AADToken: starting 
to fetch token using client creds for client ID 
40c8b4e5-865a-4297-bbbd-195df2a8a806
2019-02-25 14:23:08,274 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenSingleCall(297)) - Requesting an OAuth token 
by POST to 
https://login.microsoftonline.com/b60c9401-2154-40aa-9cff-5e3d1a20085d/oauth2/authorize
2019-02-25 14:23:08,275 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(51)) - Request Headers
2019-02-25 14:23:08,275 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Connection=close
2019-02-25 14:23:08,872 [main] DEBUG oauth2.AzureADAuthenticator 
(AzureADAuthenticator.java:getTokenSingleCall(319)) - Response 200
2019-02-25 14:23:08,872 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(51)) - Response Headers
2019-02-25 14:23:08,873 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   HTTP Response=HTTP/1.1 200 OK
2019-02-25 14:23:08,873 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Server=Microsoft-IIS/10.0
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   X-Content-Type-Options=nosniff
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Connection=close
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Pragma=no-cache
2019-02-25 14:23:08,874 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   P3P=CP="DSP CUR OTPi IND OTRi 
ONL FIN"
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Date=Mon, 25 Feb 2019 14:23:08 
GMT
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   X-Frame-Options=DENY
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   
Strict-Transport-Security=max-age=31536000; includeSubDomains
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   Cache-Control=no-cache, 
no-store
2019-02-25 14:23:08,875 [main] DEBUG services.AbfsIoUtils 
(AbfsIoUtils.java:dumpHeadersToDebugLog(54)) -   
Set-Cookie=stsservicecookie=ests; path=/; secure; 
HttpOnly;x-ms-gateway-slice=prod; path=/; secure; 
HttpOnly;esctx=AQABAACEfexXxjamQb3OeGQ4GugvJnqCleb5fTHXUNtG2e2vTfvF4cV_3pPI9WICNEPT-85W2F8bAZgHI_L1btZuYPY5pru5civf8jCRgzmUuU5mfcyeeYApPXDdetus1cHPtPEe7cNjOewm_S0dVIvGvA3pS87ggExUYAXwnRzRj55Z1T7ftN444Vg3QHApA6jYvewgAA;
 domain=.login.microsoftonline.com; path=/; secure; 
HttpOnly;fpc=AqAbRMgiP0pMldA4zkrmUY36f-C2AQAAAEzxBdQO; expires=Wed, 
27-Mar-2019 14:23:08 GMT; path=/; secure; 
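The token refresh being logged is a standard OAuth 2.0 client-credentials 
grant: a form-encoded POST of the client id and secret to the tenant's AAD 
endpoint. A sketch of the request the connector has to build (tenant and 
client IDs are the placeholders from the log, the secret is invented, and the 
resource value is only an example; AAD's documented v1 endpoint for this grant 
is the tenant's /oauth2/token URL):

```python
from urllib.parse import urlencode

def build_client_creds_request(tenant, client_id, client_secret, resource):
    """Build the URL and form body for an AAD client-credentials
    token request (OAuth 2.0, AAD v1 endpoint)."""
    url = "https://login.microsoftonline.com/{}/oauth2/token".format(tenant)
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,  # the resource the token is scoped to
    })
    return url, body

# Values echoing the debug log above; the secret is obviously fake.
url, body = build_client_creds_request(
    "b60c9401-2154-40aa-9cff-5e3d1a20085d",   # tenant from the log
    "40c8b4e5-865a-4297-bbbd-195df2a8a806",   # client ID from the log
    "not-a-real-secret",
    "https://storage.azure.com/")             # example resource
```

A 200 response with an HTML body (as in the session above) means the endpoint 
answered but not with a JSON token, which is exactly the case the improved 
logging is meant to surface.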

[GitHub] hadoop-yetus commented on issue #508: HDDS-1151. Propagate the tracing id in ScmBlockLocationProtocol

2019-02-25 Thread GitBox
hadoop-yetus commented on issue #508: HDDS-1151. Propagate the tracing id in 
ScmBlockLocationProtocol
URL: https://github.com/apache/hadoop/pull/508#issuecomment-467017438
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 535 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 79 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1244 | trunk passed |
   | -1 | compile | 211 | root in trunk failed. |
   | +1 | checkstyle | 212 | trunk passed |
   | -1 | mvnsite | 20 | server-scm in trunk failed. |
   | -1 | mvnsite | 18 | objectstore-service in trunk failed. |
   | -1 | mvnsite | 16 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 1076 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 14 | server-scm in trunk failed. |
   | -1 | findbugs | 14 | objectstore-service in trunk failed. |
   | -1 | findbugs | 14 | ozone-manager in trunk failed. |
   | -1 | javadoc | 13 | server-scm in trunk failed. |
   | -1 | javadoc | 14 | objectstore-service in trunk failed. |
   | -1 | javadoc | 14 | ozone-manager in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 10 | server-scm in the patch failed. |
   | -1 | mvninstall | 11 | objectstore-service in the patch failed. |
   | -1 | mvninstall | 10 | ozone-manager in the patch failed. |
   | +1 | compile | 988 | the patch passed |
   | -1 | cc | 988 | root generated 7 new + 2 unchanged - 0 fixed = 9 total 
(was 2) |
   | -1 | javac | 988 | root generated 660 new + 831 unchanged - 0 fixed = 1491 
total (was 831) |
   | +1 | checkstyle | 203 | the patch passed |
   | -1 | mvnsite | 23 | server-scm in the patch failed. |
   | -1 | mvnsite | 22 | objectstore-service in the patch failed. |
   | -1 | mvnsite | 19 | ozone-manager in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 711 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 23 | server-scm in the patch failed. |
   | -1 | findbugs | 22 | objectstore-service in the patch failed. |
   | -1 | findbugs | 22 | ozone-manager in the patch failed. |
   | -1 | javadoc | 23 | server-scm in the patch failed. |
   | -1 | javadoc | 22 | objectstore-service in the patch failed. |
   | -1 | javadoc | 23 | ozone-manager in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 86 | common in the patch failed. |
   | -1 | unit | 22 | server-scm in the patch failed. |
   | -1 | unit | 22 | objectstore-service in the patch failed. |
   | -1 | unit | 23 | ozone-manager in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6161 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/508 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux ad263b04c014 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cec906 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-compile-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-mvnsite-hadoop-hdds_server-scm.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-mvnsite-hadoop-ozone_objectstore-service.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-508/1/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | javadoc | 

[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776860#comment-16776860
 ] 

Steve Loughran commented on HADOOP-16140:
-

bq.  I wonder whether it would make sense to implement something like git clean 
--dry-run: 

that's a really good idea


> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge" but in my experience 
> supporting clusters, end users find this confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own is:
> # Run the expunge command once - this will move the current folder to a 
> checkpoint and remove any old checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command hadoop fs -Dfs.trash.interval=1 
> -expunge.
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash" 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #490: HDDS-1113. Remove default dependencies from hadoop-ozone project

2019-02-25 Thread GitBox
hadoop-yetus commented on issue #490: HDDS-1113. Remove default dependencies 
from hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/490#issuecomment-467004265
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1019 | trunk passed |
   | -1 | compile | 18 | hadoop-ozone in trunk failed. |
   | -1 | mvnsite | 18 | hadoop-ozone in trunk failed. |
   | -1 | mvnsite | 19 | client in trunk failed. |
   | -1 | mvnsite | 17 | common in trunk failed. |
   | -1 | mvnsite | 22 | integration-test in trunk failed. |
   | -1 | mvnsite | 21 | objectstore-service in trunk failed. |
   | -1 | mvnsite | 19 | ozone-manager in trunk failed. |
   | -1 | mvnsite | 42 | s3gateway in trunk failed. |
   | -1 | mvnsite | 18 | tools in trunk failed. |
   | +1 | shadedclient | 1860 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 13 | hadoop-ozone in trunk failed. |
   | -1 | javadoc | 15 | client in trunk failed. |
   | -1 | javadoc | 17 | common in trunk failed. |
   | -1 | javadoc | 15 | integration-test in trunk failed. |
   | -1 | javadoc | 18 | objectstore-service in trunk failed. |
   | -1 | javadoc | 17 | ozone-manager in trunk failed. |
   | -1 | javadoc | 17 | s3gateway in trunk failed. |
   | -1 | javadoc | 14 | tools in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 8 | Maven dependency ordering for patch |
   | -1 | mvninstall | 14 | hadoop-ozone in the patch failed. |
   | -1 | mvninstall | 10 | client in the patch failed. |
   | -1 | mvninstall | 10 | common in the patch failed. |
   | -1 | mvninstall | 9 | integration-test in the patch failed. |
   | -1 | mvninstall | 10 | objectstore-service in the patch failed. |
   | -1 | mvninstall | 10 | ozone-manager in the patch failed. |
   | -1 | mvninstall | 10 | s3gateway in the patch failed. |
   | -1 | mvninstall | 11 | tools in the patch failed. |
   | -1 | compile | 11 | hadoop-ozone in the patch failed. |
   | -1 | javac | 11 | hadoop-ozone in the patch failed. |
   | -1 | mvnsite | 16 | hadoop-ozone in the patch failed. |
   | -1 | mvnsite | 10 | client in the patch failed. |
   | -1 | mvnsite | 10 | common in the patch failed. |
   | -1 | mvnsite | 10 | integration-test in the patch failed. |
   | -1 | mvnsite | 11 | objectstore-service in the patch failed. |
   | -1 | mvnsite | 12 | ozone-manager in the patch failed. |
   | -1 | mvnsite | 12 | s3gateway in the patch failed. |
   | -1 | mvnsite | 11 | tools in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 11 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | javadoc | 14 | client in the patch failed. |
   | -1 | javadoc | 16 | common in the patch failed. |
   | -1 | javadoc | 15 | integration-test in the patch failed. |
   | -1 | javadoc | 15 | objectstore-service in the patch failed. |
   | -1 | javadoc | 15 | ozone-manager in the patch failed. |
   | -1 | javadoc | 16 | s3gateway in the patch failed. |
   | -1 | javadoc | 16 | tools in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 17 | hadoop-ozone in the patch failed. |
   | -1 | unit | 14 | client in the patch failed. |
   | -1 | unit | 15 | common in the patch failed. |
   | -1 | unit | 15 | integration-test in the patch failed. |
   | -1 | unit | 15 | objectstore-service in the patch failed. |
   | -1 | unit | 15 | ozone-manager in the patch failed. |
   | -1 | unit | 15 | s3gateway in the patch failed. |
   | -1 | unit | 14 | tools in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3334 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/490 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux e851ef1fcbf6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cec906 |
   | 

[GitHub] hadoop-yetus commented on issue #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN.

2019-02-25 Thread GitBox
hadoop-yetus commented on issue #516: HADOOP-16146. Make start-build-env.sh 
safe in case of misusage of DOCKER_INTERACTIVE_RUN.
URL: https://github.com/apache/hadoop/pull/516#issuecomment-466981724
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1007 | trunk passed |
   | +1 | mvnsite | 781 | trunk passed |
   | +1 | shadedclient | 647 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 998 | the patch passed |
   | +1 | mvnsite | 758 | the patch passed |
   | +1 | shellcheck | 0 | The patch generated 0 new + 0 unchanged - 1 fixed = 
0 total (was 1) |
   | +1 | shelldocs | 17 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 1058 | root in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6070 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-516/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/516 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  |
   | uname | Linux 3d9c839900b3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cec906 |
   | maven | version: Apache Maven 3.3.9 |
   | shellcheck | v0.4.6 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-516/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-516/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16145) Add Quota Preservation to DistCp

2019-02-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776762#comment-16776762
 ] 

Steve Loughran commented on HADOOP-16145:
-

Can you link to related bits of work? Is this just going to be HDFS? What will 
happen if the destination is a different FS?

> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> This JIRA is to track distcp support for handling quotas with the preserve 
> options.
> Add a new command line argument to support that.






[jira] [Updated] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN

2019-02-25 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-16146:
--
Status: Patch Available  (was: Open)

> Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
> --
>
> Key: HADOOP-16146
> URL: https://issues.apache.org/jira/browse/HADOOP-16146
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> [~aw] reported the problem in HDDS-891:
> {quote}DOCKER_INTERACTIVE_RUN opens the door for users to set command line 
> options to docker. Most notably, -c and -v and a few others that share one 
> particular characteristic: they reference the file system. As soon as shell 
> code hits the file system, it is no longer safe to assume space delimited 
> options. In other words, -c /My Cool Filesystem/Docker Files/config.json or 
> -v /c_drive/Program Files/Data:/data may be something a user wants to do, but 
> the script now breaks because of the IFS assumptions.
> {quote}
> DOCKER_INTERACTIVE_RUN was used in Jenkins to run the normal build process in 
> docker. If DOCKER_INTERACTIVE_RUN is set to empty, the docker 
> container is started without the "-i -t" flags.
> This can be improved by checking the value of the environment variable and 
> enabling only a fixed set of values.






[GitHub] elek opened a new pull request #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN.

2019-02-25 Thread GitBox
elek opened a new pull request #516: HADOOP-16146. Make start-build-env.sh safe 
in case of misusage of DOCKER_INTERACTIVE_RUN.
URL: https://github.com/apache/hadoop/pull/516
 
 
   See: https://issues.apache.org/jira/browse/HADOOP-16146
   





[jira] [Commented] (HADOOP-16035) Jenkinsfile for Hadoop

2019-02-25 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776682#comment-16776682
 ] 

Elek, Marton commented on HADOOP-16035:
---

FTR: This Jenkinsfile doesn't support the optional subprojects (like submarine 
and ozone) and generates a lot of noise at 
[https://builds.apache.org/job/hadoop-multibranch]. Is there any suggestion for 
how the hdds/ozone/submarine projects can be supported? (See for example this 
report: https://github.com/apache/hadoop/pull/513)

> Jenkinsfile for Hadoop
> --
>
> Key: HADOOP-16035
> URL: https://issues.apache.org/jira/browse/HADOOP-16035
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16035.00.patch, HADOOP-16035.01.patch
>
>
> In order to enable Github Branch Source plugin on Jenkins to test Github PRs 
> with Apache Yetus:
> - an account that can read Github
> - Apache Yetus 0.9.0+
> - a Jenkinsfile that uses the above






[jira] [Created] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN

2019-02-25 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-16146:
-

 Summary: Make start-build-env.sh safe in case of misusage of 
DOCKER_INTERACTIVE_RUN
 Key: HADOOP-16146
 URL: https://issues.apache.org/jira/browse/HADOOP-16146
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


[~aw] reported the problem in HDDS-891:
{quote}DOCKER_INTERACTIVE_RUN opens the door for users to set command line 
options to docker. Most notably, -c and -v and a few others that share one 
particular characteristic: they reference the file system. As soon as shell 
code hits the file system, it is no longer safe to assume space delimited 
options. In other words, -c /My Cool Filesystem/Docker Files/config.json or -v 
/c_drive/Program Files/Data:/data may be something a user wants to do, but the 
script now breaks because of the IFS assumptions.
{quote}
DOCKER_INTERACTIVE_RUN was used in Jenkins to run the normal build process in 
docker. If DOCKER_INTERACTIVE_RUN is set to empty, the docker container 
is started without the "-i -t" flags.

This can be improved by checking the value of the environment variable and 
enabling only a fixed set of values.
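A minimal sketch of such an allowlist check might look like the following; the 
function name and the exact set of accepted values are assumptions for 
illustration, not necessarily what the committed patch does:

```shell
# Sketch of an allowlist check for DOCKER_INTERACTIVE_RUN; the accepted
# values (unset or explicitly empty) are an assumption, not the final fix.
docker_flags() {
  case "${DOCKER_INTERACTIVE_RUN-undefined}" in
    undefined) echo "-i -t" ;;   # variable unset: run with an interactive TTY
    "")        echo ""      ;;   # explicitly empty: non-interactive (CI) run
    *)         echo "DOCKER_INTERACTIVE_RUN must be unset or empty" >&2
               return 1     ;;   # reject anything else instead of passing it to docker
  esac
}
```

With a check like this, arbitrary strings such as "-v /c_drive/Program 
Files/Data:/data" are rejected up front instead of being word-split by IFS into 
unexpected docker arguments.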






[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-25 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776663#comment-16776663
 ] 

Adam Antal commented on HADOOP-16140:
-

I wonder whether it would make sense to implement something like {{git clean 
--dry-run}}: it does not remove anything, but shows you what the command would 
have done. I think this would be helpful for the user, though technically it 
would do nothing more than a {{dfs -ls}} on the trash root. Maybe the 
{{-emptyTrash}} command together with this extra option could clear up the 
confusion about HDFS trash.

What is your opinion?

For the patch you uploaded, please add some unit tests and CLI tests for the 
new command.

> Add emptyTrash option to purge trash immediately
> 
>
> Key: HADOOP-16140
> URL: https://issues.apache.org/jira/browse/HADOOP-16140
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14200.001.patch
>
>
> I have always felt the HDFS trash is missing a simple way to empty the 
> current user's trash immediately. We have "expunge", but in my experience 
> supporting clusters, end users find it confusing. When most end users run 
> expunge, they really want to empty their trash immediately and get confused 
> when expunge does not do this.
> This can result in users performing somewhat dangerous "skipTrash" operations 
> on the trash to free up space. The alternative, which most users will not 
> figure out on their own, is:
> # Run the expunge command once - this moves the current folder to a 
> checkpoint and removes any checkpoints older than the retention interval
> # Wait over 1 minute and then run expunge again, overriding fs.trash.interval 
> to 1 minute using the following command: hadoop fs -Dfs.trash.interval=1 
> -expunge
> With this Jira I am proposing to add an extra command, "hdfs dfs -emptyTrash", 
> that purges everything in the logged-in user's Trash directories immediately.
> How would the community feel about adding this new option? I will upload a 
> patch for comments.
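The two-step workaround quoted above can be sketched as a small shell helper. 
The purge_trash function and its interval check are illustrative additions; 
only the two hadoop fs invocations come from the issue description, and a 
running HDFS client is assumed:

```shell
# Sketch of the two-step trash purge described above; purge_trash and its
# input check are illustrative, not part of the proposed patch.
purge_trash() {
  interval="${1:-1}"   # retention in minutes; must be a positive integer
  case "$interval" in
    ''|*[!0-9]*) echo "interval must be a positive integer" >&2; return 1 ;;
  esac
  # Step 1: checkpoint the current trash folder and drop checkpoints older
  # than the configured retention interval.
  hadoop fs -expunge || return 1
  # Step 2: wait past one minute, then expunge again with a short retention
  # so the checkpoint created in step 1 is also removed.
  sleep 61
  hadoop fs -Dfs.trash.interval="$interval" -expunge
}
```

This is exactly the sequence the proposed -emptyTrash command would replace 
with a single invocation.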


