[jira] [Updated] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting

2018-04-04 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-14855:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> Hadoop scripts may errantly believe a daemon is still running, preventing it 
> from starting
> --
>
> Key: HADOOP-14855
> URL: https://issues.apache.org/jira/browse/HADOOP-14855
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha4
>Reporter: Aaron T. Myers
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14855.001.patch, HADOOP-14855.002.patch
>
>
> I encountered a case recently where the NN wouldn't start, with the error 
> message "namenode is running as process 16769.  Stop it first." In fact the 
> NN was not running at all, but rather another long-running process was 
> running with this pid.
> It looks to me like our scripts just check to see if _any_ process is running 
> with the pid that the NN (or any Hadoop daemon) most recently ran with. This 
> is clearly not a fool-proof way of checking to see if a particular type of 
> daemon is now running, as some other process could start running with the 
> same pid since the daemon in question was previously shut down.
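
As a rough shell sketch of a more robust check (illustrative only, not the actual hadoop-functions.sh change committed here):

{code}
# pidfile path is illustrative
pidfile="/tmp/hadoop-hdfs-namenode.pid"
pid=$(cat "${pidfile}" 2>/dev/null)
if [[ -n "${pid}" ]] && kill -0 "${pid}" 2>/dev/null; then
  # the pid is live, but only refuse to start if it is really our daemon:
  # check the command line of the process, not just its existence
  if ps -p "${pid}" -o args= | grep -q "namenode"; then
    echo "namenode is running as process ${pid}.  Stop it first."
    exit 1
  fi
fi
# pid file is stale or the pid was reused by another process: safe to start
{code}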



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting

2018-04-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426280#comment-16426280
 ] 

Miklos Szegedi commented on HADOOP-14855:
-

Committed to trunk. Thank you for the patch [~rkanter] and for the reviews 
[~aw], [~atm] and [~ste...@apache.org].

> Hadoop scripts may errantly believe a daemon is still running, preventing it 
> from starting
> --
>
> Key: HADOOP-14855
> URL: https://issues.apache.org/jira/browse/HADOOP-14855
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha4
>Reporter: Aaron T. Myers
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14855.001.patch, HADOOP-14855.002.patch
>
>
> I encountered a case recently where the NN wouldn't start, with the error 
> message "namenode is running as process 16769.  Stop it first." In fact the 
> NN was not running at all, but rather another long-running process was 
> running with this pid.
> It looks to me like our scripts just check to see if _any_ process is running 
> with the pid that the NN (or any Hadoop daemon) most recently ran with. This 
> is clearly not a fool-proof way of checking to see if a particular type of 
> daemon is now running, as some other process could start running with the 
> same pid since the daemon in question was previously shut down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting

2018-04-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426271#comment-16426271
 ] 

Miklos Szegedi commented on HADOOP-14855:
-

+1 LGTM. I will commit this shortly.

> Hadoop scripts may errantly believe a daemon is still running, preventing it 
> from starting
> --
>
> Key: HADOOP-14855
> URL: https://issues.apache.org/jira/browse/HADOOP-14855
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha4
>Reporter: Aaron T. Myers
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-14855.001.patch, HADOOP-14855.002.patch
>
>
> I encountered a case recently where the NN wouldn't start, with the error 
> message "namenode is running as process 16769.  Stop it first." In fact the 
> NN was not running at all, but rather another long-running process was 
> running with this pid.
> It looks to me like our scripts just check to see if _any_ process is running 
> with the pid that the NN (or any Hadoop daemon) most recently ran with. This 
> is clearly not a fool-proof way of checking to see if a particular type of 
> daemon is now running, as some other process could start running with the 
> same pid since the daemon in question was previously shut down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15040) Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ Yarn Log Aggregation

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15040:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Upgrade AWS SDK to 1.11.271: NPE bug spams logs w/ Yarn Log Aggregation
> ---
>
> Key: HADOOP-15040
> URL: https://issues.apache.org/jira/browse/HADOOP-15040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15040.001.patch
>
>
> My colleagues working with Yarn log aggregation found that they were getting 
> this message spammed in their logs when they used an s3a:// URI for logs 
> (yarn.nodemanager.remote-app-log-dir):
> {noformat}
> getting attribute Region of com.amazonaws.management:type=AwsSdkMetrics threw 
> an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>   at 
> 
> Caused by: java.lang.NullPointerException
>   at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
>   at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
>   at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}
> This happens even though AWS SDK CloudWatch metrics reporting was disabled 
> (the default), which is a bug. 
> I filed a [github issue|https://github.com/aws/aws-sdk-java/issues/1375] and 
> it looks like a fix should be coming around SDK release 1.11.229 or so.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15364) Add support for S3 Select to S3A

2018-04-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15364:
---

 Summary: Add support for S3 Select to S3A
 Key: HADOOP-15364
 URL: https://issues.apache.org/jira/browse/HADOOP-15364
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Assignee: Steve Loughran


Expect a PoC patch for this in a couple of days; 

* it'll depend on an SDK update to work, plus a couple of other minor changes
* it also adds a command line option:
{code}
hadoop s3guard select -header use -compression gzip -limit 100 \
  "s3a://landsat-pds/scene_list.gz" \
  "SELECT s.entityId FROM S3OBJECT s WHERE s.cloudCover = '0.0'"
{code}

For wider use we'll need to implement HADOOP-15229 so that callers can pass 
down the expression along with any other parameters.
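
As a taster of what the underlying call looks like, a minimal Java sketch against the AWS SDK 1.11.x S3 Select API (bucket, key and options mirror the CLI example above; treat the exact setters as assumptions until the SDK update lands):

{code}
import java.io.InputStream;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

public class SelectSketch {
  public static void main(String[] args) throws Exception {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    SelectObjectContentRequest request = new SelectObjectContentRequest()
        .withBucketName("landsat-pds")
        .withKey("scene_list.gz")
        .withExpressionType(ExpressionType.SQL)
        .withExpression(
            "SELECT s.entityId FROM S3OBJECT s WHERE s.cloudCover = '0.0'");
    // gzipped CSV in, with the header row used for column names; CSV out
    request.setInputSerialization(new InputSerialization()
        .withCsv(new CSVInput().withFileHeaderInfo(FileHeaderInfo.USE))
        .withCompressionType(CompressionType.GZIP));
    request.setOutputSerialization(
        new OutputSerialization().withCsv(new CSVOutput()));
    SelectObjectContentResult result = s3.selectObjectContent(request);
    try (InputStream records = result.getPayload().getRecordsInputStream()) {
      int b;
      while ((b = records.read()) >= 0) {
        System.out.write(b);
      }
      System.out.flush();
    }
  }
}
{code}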



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426321#comment-16426321
 ] 

genericqa commented on HADOOP-15357:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 243 unchanged - 1 fixed = 243 total (was 244) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15357 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917611/HADOOP-15357.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 291b683cc295 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3087e89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14435/testReport/ |
| Max. process+thread count | 1718 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14435/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Configuration.getPropsWithPrefix no longer does variable 

[jira] [Updated] (HADOOP-15273) distcp can't handle remote stores with different checksum algorithms

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15273:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> distcp can't handle remote stores with different checksum algorithms
> 
>
> Key: HADOOP-15273
> URL: https://issues.apache.org/jira/browse/HADOOP-15273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15273-001.patch, HADOOP-15273-002.patch, 
> HADOOP-15273-003.patch
>
>
> When using distcp without {{-skipcrccheck}}, if there's a checksum mismatch 
> between src and dest store types (e.g. hdfs to s3), then the error message 
> will talk about blocksize, even when it's the underlying checksum protocol 
> itself which is the cause of the failure:
> bq. Source and target differ in block-size. Use -pb to preserve block-sizes 
> during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. 
> (NOTE: By skipping checksums, one runs the risk of masking data-corruption 
> during file-transfer.)
> update: the CRC check always takes place on a distcp upload before the file 
> is renamed into place, *and you can't disable it then*
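
For reference, the option that controls the distcp-level comparison is {{-skipcrccheck}} on an incremental copy; a sketch (endpoints and paths are illustrative):

{code}
# HDFS and S3A use different checksum algorithms, so skip the CRC comparison
hadoop distcp -update -skipcrccheck hdfs://nn:8020/data s3a://example-bucket/data
{code}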



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15289) FileStatus.readFields() assertion incorrect

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-15289:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> FileStatus.readFields() assertion incorrect
> ---
>
> Key: HADOOP-15289
> URL: https://issues.apache.org/jira/browse/HADOOP-15289
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.1.0, 3.0.3
>
> Attachments: HADOOP-15289-001.patch
>
>
> As covered in HBASE-20123 ("Backup test fails against hadoop 3"), I think the 
> assert at the end of {{FileStatus.readFields()}} is wrong; if you run the 
> code with assert=true against a directory, an IOE will get raised.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14855) Hadoop scripts may errantly believe a daemon is still running, preventing it from starting

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426298#comment-16426298
 ] 

Hudson commented on HADOOP-14855:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13927 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13927/])
HADOOP-14855. Hadoop scripts may errantly believe a daemon is still (szegedim: 
rev e52539b46fb13db423490fe02d46e9fae72d72fe)
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> Hadoop scripts may errantly believe a daemon is still running, preventing it 
> from starting
> --
>
> Key: HADOOP-14855
> URL: https://issues.apache.org/jira/browse/HADOOP-14855
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha4
>Reporter: Aaron T. Myers
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-14855.001.patch, HADOOP-14855.002.patch
>
>
> I encountered a case recently where the NN wouldn't start, with the error 
> message "namenode is running as process 16769.  Stop it first." In fact the 
> NN was not running at all, but rather another long-running process was 
> running with this pid.
> It looks to me like our scripts just check to see if _any_ process is running 
> with the pid that the NN (or any Hadoop daemon) most recently ran with. This 
> is clearly not a fool-proof way of checking to see if a particular type of 
> daemon is now running, as some other process could start running with the 
> same pid since the daemon in question was previously shut down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426438#comment-16426438
 ] 

Aaron Fabbri commented on HADOOP-14759:
---

This looks pretty good, thanks for the work on this useful feature.

{noformat}
+```bash
+hadoop s3guard prune -hours 1 -minutes 30 -meta 
dynamodb://ireland-team/path_prefix/ -region eu-west-1
+```
+
+Delete all entries more than 90 minutes old from the table "ireland-team" with
+prefix "path_prefix" in the region "eu-west-1".
{noformat}

I think the path_prefix goes in the s3a:// URI, not the MetadataStore URI, 
right?  I tested this like so:

{noformat}
hadoop s3guard prune -hours 24 s3a://my-bucket/stuffs/c
{noformat}

and confirmed that it only pruned the entries starting with /stuffs/c, as 
expected.  I also ran the integration tests in us-west-2. I'm +1 on the patch 
once the docs are fixed.


> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch, HADOOP-14759.005.patch, 
> HADOOP-14759.006.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-04 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426501#comment-16426501
 ] 

Xiao Chen commented on HADOOP-14445:


Thanks again for the reviews Rushabh and Wei-Chiu.

Testing this in real clusters revealed some issues in the patch. 
[^HADOOP-14445.12.patch] addressed them. Namely:
- {{KMSCP#addDelegationTokens}} should only setService on KMS_D_T tokens, so if 
there is an old server returning kms-dt, it would still work.
- In {{KMSCP#selectKMSDelegationToken}}, the fallback logic should use the 
existing logic to {{getToken}} by service, instead of using a selector. This 
way we can be sure a new client works with an old submitter + a new server. 
Added a detailed comment there.
- Also added a 'real' unit test, {{TestKMSClientProvider}}, to test these 
explicitly, complementing the existing TestKMS cases.

I have tested the latest patch via wordcount (in an env with 3 NMs and 2 KMS 
instances; the RM host has neither an NM nor a KMS, and was used as the job 
submitter):

- upgrade 1 NM
- upgrade 1 KMS
- upgrade both KMS
- upgrade all NM
- upgrade RM

The job ran at each step; I verified from debug-level yarn app logs that 
authentication was successful using tokens.
In the end, I deployed the new config=false everywhere and verified things 
still work.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch
>
>
> As discovered in HADOOP-14441, KMS HA setups using 
> LoadBalancingKMSClientProvider do not share delegation tokens (a client uses 
> the KMS address/port as the key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
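
To make the keying concrete, a small sketch of how the service text is built (addresses are illustrative); each endpoint yields a distinct service, so a token fetched through one KMS instance is never found when the client talks to another:

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class TokenServiceSketch {
  public static void main(String[] args) {
    // two endpoints of the same logical KMS; different ports stand in
    // for the different hosts of a real HA deployment
    Text s1 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9600));
    Text s2 = SecurityUtil.buildTokenService(new InetSocketAddress("localhost", 9601));
    // the services differ, so Credentials#getToken(service) can only ever
    // return the token for the instance it was issued against
    System.out.println(s1 + " != " + s2);
  }
}
{code}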



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-04 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14445:
---
Attachment: HADOOP-14445.12.patch

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch
>
>
> As discovered in HADOOP-14441, KMS HA setups using 
> LoadBalancingKMSClientProvider do not share delegation tokens (a client uses 
> the KMS address/port as the key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15361:
-

 Summary: RawLocalFileSystem should use Java nio framework for 
rename
 Key: HADOOP-15361
 URL: https://issues.apache.org/jira/browse/HADOOP-15361
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor


Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
The fallback logic is copy-on-fail: when rename fails, it copies the source and 
then deletes it.
 An additional fallback logic was needed for Windows to provide POSIX rename 
behavior.

Due to the fallback logic, RawLocalFileSystem does not pass the contract tests 
(HADOOP-13082).

By using the Java nio framework, both could be eliminated, since it is not 
platform dependent and provides cross-volume rename.

In addition, the fallback logic for Windows is not correct, since Java io 
overrides the destination only if the source is also a directory, but the 
handleEmptyDstDirectoryOnWindows method checks only the destination. That means 
rename allows overriding a directory with a file on Windows but not on Unix.

File#renameTo and Files#move are not 100% compatible:
 If the source is a directory and the destination is an empty directory, 
File#renameTo overrides the destination but Files#move does not. We have to use 
{{StandardCopyOption.REPLACE_EXISTING}}, but that overrides the destination 
even if the source or the destination is a file. So to make them compatible, we 
have to check that either the source or the destination is a directory before 
we add the copy option.

I think the correct strategy is
 * Where the contract test passed so far, it should pass after this.
 * Where the contract test failed because of a Java-specific quirk and not 
because of the fallback logic, we should keep the original behavior.
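
A minimal sketch of a nio-based rename following the compatibility rule described above (names are illustrative, not the attached patch):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class NioRenameSketch {
  /** File#renameTo-like semantics on top of Files#move. */
  static boolean rename(Path src, Path dst) {
    try {
      if (Files.isDirectory(src) || Files.isDirectory(dst)) {
        // only when a directory is involved do we allow replacing the
        // destination, mirroring File#renameTo onto an empty directory
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
      } else {
        Files.move(src, dst);
      }
      return true;
    } catch (IOException e) {
      return false; // FileSystem#rename reports failure as false
    }
  }
}
{code}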



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14651) Update okhttp version to 2.7.5

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14651:

   Resolution: Fixed
Fix Version/s: 3.0.2
   2.9.1
   Status: Resolved  (was: Patch Available)

Applied to branch-2; reran all the ADL tests for validation. There was one 
transient failure of a test in {{TestAdlFileContextMainOperationsLive}}, which 
went away on the second attempt.

> Update okhttp version to 2.7.5
> --
>
> Key: HADOOP-14651
> URL: https://issues.apache.org/jira/browse/HADOOP-14651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.1.0, 2.9.1, 3.0.2
>
> Attachments: HADOOP-14651-branch-2.0.004.patch, 
> HADOOP-14651-branch-2.0.004.patch, HADOOP-14651-branch-3.0.004.patch, 
> HADOOP-14651-branch-3.0.004.patch, HADOOP-14651.001.patch, 
> HADOOP-14651.002.patch, HADOOP-14651.003.patch, HADOOP-14651.004.patch
>
>
> The current artifact is:
> com.squareup.okhttp:okhttp:2.4.0
> That version could either be bumped to 2.7.5 (the latest of that line), or 
> use the latest artifact:
> com.squareup.okhttp3:okhttp:3.8.1
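
For reference, the bump is a one-line version change in the pom.xml of whichever module pulls okhttp in (the ADL connector here; a sketch, not the exact patch hunk):

{code}
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
  <version>2.7.5</version>  <!-- was 2.4.0 -->
</dependency>
{code}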



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Status: Patch Available  (was: Open)

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is copy-on-fail: when rename fails, it copies the source 
> and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic, RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java nio framework, both could be eliminated, since it is not 
> platform dependent and provides cross-volume rename.
> In addition, the fallback logic for Windows is not correct, since Java io 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overrides the 
> destination even if the source or the destination is a file. So to make them 
> compatible, we have to check that either the source or the destination is a 
> directory before we add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this.
>  * Where the contract test failed because of a Java-specific quirk and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14759:

Status: Patch Available  (was: Open)

Submitting patch 6, which is rebased on top of HADOOP-14758, so it can be 
committed without conflict.

Tests ran on us-west-2 successfully. This included both unit (mvn test) and 
integration tests (mvn verify).

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch, HADOOP-14759.005.patch, 
> HADOOP-14759.006.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13982) ADL To map container 403 to AccessDeniedException

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13982:

Summary: ADL To map container 403 to AccessDeniedException  (was: Print 
better error when accessing a store without permission)

> ADL To map container 403 to AccessDeniedException
> -
>
> Key: HADOOP-13982
> URL: https://issues.apache.org/jira/browse/HADOOP-13982
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Atul Sikaria
>Priority: Major
>  Labels: supportability
>
> The error message when accessing a store without permission is not user 
> friendly:
> {noformat}
> $ hdfs dfs -ls adl://STORE.azuredatalakestore.net/
> ls: Operation GETFILESTATUS failed with HTTP403 : null
> {noformat}
> Store {{STORE}} exists but Hadoop is configured with an SPI that does not 
> have access to the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user

2018-04-04 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425734#comment-16425734
 ] 

Bharat Viswanadham commented on HADOOP-12953:
-

[~udayk] Are you still working on this?

 

> New API for libhdfs to get FileSystem object as a proxy user
> 
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Uday Kale
>Assignee: Uday Kale
>Priority: Major
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch
>
>
> Secure impersonation in HDFS needs users to create proxy users and work with 
> those. In libhdfs, the hdfsBuilder accepts a userName but calls 
> FileSystem.get() or FileSystem.newInstance() with the user name to connect 
> as. Both these interfaces use getBestUGI() to get the UGI for the given user. 
> That does not suit services whose end-users do not access HDFS directly, but 
> go via the service to first get authenticated with LDAP; the service owner 
> then impersonates the end-user to eventually provide the underlying data.
> For such services that authenticate end-users via LDAP, the end users are not 
> authenticated by Kerberos, so their authentication details won't be in the 
> Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this 
> either.
> Hence the need for the new API for libhdfs to get the FileSystem object as a 
> proxy user, following the 'secure impersonation' recommendations. This 
> approach is secure since HDFS authenticates the service owner and then 
> validates the right of the service owner to impersonate the given user, as 
> allowed by the hadoop.proxyuser.* parameters of the HDFS config.
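
For reference, the Java-side pattern such a libhdfs API would wrap is the standard proxy-user idiom (a sketch with illustrative names, not the proposed C signature):

{code}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyFsSketch {
  public static FileSystem getAsProxy(String endUser) throws Exception {
    // the service owner is the Kerberos-authenticated login user
    UserGroupInformation proxy = UserGroupInformation.createProxyUser(
        endUser, UserGroupInformation.getLoginUser());
    // HDFS checks the hadoop.proxyuser.* settings to decide whether
    // this impersonation is allowed
    return proxy.doAs(
        (PrivilegedExceptionAction<FileSystem>) () ->
            FileSystem.get(new Configuration()));
  }
}
{code}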



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15361:
--
Attachment: HADOOP-15361.01.patch

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is copy-on-fail: when rename fails, it copies the source 
> and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic, RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java nio framework, both could be eliminated, since it is not 
> platform dependent and provides cross-volume rename.
> In addition, the fallback logic for Windows is not correct, since Java io 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overrides the 
> destination even if the source or the destination is a file. So to make them 
> compatible, we have to check that either the source or the destination is a 
> directory before we add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this.
>  * Where the contract test failed because of a Java-specific quirk and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14759:

Attachment: HADOOP-14759.006.patch

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch, HADOOP-14759.005.patch, 
> HADOOP-14759.006.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14759:

Status: Open  (was: Patch Available)

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch, HADOOP-14759.005.patch, 
> HADOOP-14759.006.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise
> +maybe also add a hard date rather than a relative one



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14122) Add ADLS to hadoop-cloud-storage-project

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14122.
-
   Resolution: Duplicate
 Assignee: Steve Loughran
Fix Version/s: 3.1.0

> Add ADLS to hadoop-cloud-storage-project
> 
>
> Key: HADOOP-14122
> URL: https://issues.apache.org/jira/browse/HADOOP-14122
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.1.0
>
>
> Add hadoop-azure-datalake to hadoop-cloud-storage-project.
> HADOOP-13687 did include hadoop-azure-datalake at one point.
> [~cnauroth], could you comment?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425780#comment-16425780
 ] 

genericqa commented on HADOOP-14759:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
29s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-14759 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917554/HADOOP-14759.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb4518a49270 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b779f4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14432/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14432/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: 

[jira] [Commented] (HADOOP-13982) ADL To map container 403 to AccessDeniedException

2018-04-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425674#comment-16425674
 ] 

Steve Loughran commented on HADOOP-13982:
-

Present in trunk (without the pending SDK updates). A 403 should map to an 
AccessDeniedException; changing the title.

Full stack

{code}
com.microsoft.azure.datalake.store.ADLException: Error getting info for file /
Operation GETFILESTATUS failed with HTTP403 : null
Last encountered exception thrown after 1 tries. [HTTP403(null)]
 [ServerRequestId:ecd3c5d0-352a-4661-8ff0-15bfadd138cf]
at 
com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1169)
at 
com.microsoft.azure.datalake.store.ADLStoreClient.getDirectoryEntry(ADLStoreClient.java:737)
at 
org.apache.hadoop.fs.adl.AdlFileSystem.getFileStatus(AdlFileSystem.java:488)
at 
org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:382)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:274)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:170)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:535)
at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:545)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
{code}
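
A sketch of the translation wanted, assuming the ADL SDK exposes the status as {{ADLException.httpResponseCode}} (field name from memory, so treat as an assumption):

{code}
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import com.microsoft.azure.datalake.store.ADLException;

final class AdlErrorTranslation {
  /** Turn an ADL SDK failure into a meaningful IOException. */
  static IOException toIOE(String path, ADLException e) {
    if (e.httpResponseCode == 403) {
      AccessDeniedException ade = new AccessDeniedException(
          path, null, "HTTP 403: " + e.getMessage());
      ade.initCause(e);
      return ade;
    }
    return e; // ADLException is already an IOException
  }
}
{code}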


> ADL To map container 403 to AccessDeniedException
> -
>
> Key: HADOOP-13982
> URL: https://issues.apache.org/jira/browse/HADOOP-13982
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Atul Sikaria
>Priority: Major
>  Labels: supportability
>
> The error message when accessing a store without permission is not user 
> friendly:
> {noformat}
> $ hdfs dfs -ls adl://STORE.azuredatalakestore.net/
> ls: Operation GETFILESTATUS failed with HTTP403 : null
> {noformat}
> Store {{STORE}} exists but Hadoop is configured with an SPI that does not 
> have access to the store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14452) Add hadoop-aliyun to cloud storage module

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14452:

Summary: Add hadoop-aliyun to cloud storage module  (was: Add adl and 
aliyun to cloud storage module)

> Add hadoop-aliyun to cloud storage module
> -
>
> Key: HADOOP-14452
> URL: https://issues.apache.org/jira/browse/HADOOP-14452
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> HADOOP-13687 added the then-existing cloud connector file systems (aws, 
> azure and openstack) to a new module, hadoop-cloud-storage. Azure Data Lake 
> and aliyun were not included in it.
> I think we should add them too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425739#comment-16425739
 ] 

Steve Loughran commented on HADOOP-15361:
-

Seems good, though there are still some funnies about Windows ::MoveFile(), 
which will switch to a copy if the dest is on a different volume.

* revert the import .* changes; maybe look at your IDE settings there to keep 
that down.
* {{TestRawLocalFileSystemContract}} has some new tests. Are they common to all 
filesystems? If so I'd like them in {{AbstractContractRenameTest}}, though that 
will force you to test against the object stores too, I'm afraid.

The compatibility is the troublespot here. How does it relate to what we have 
in filesystem.md? 

The normative behaviour we want is that of HDFS. If you put the new tests in 
{{AbstractContractRenameTest}}, the HDFS subclass {{TestHDFSContractRename}} 
must pass them. If it doesn't, that's a problem in the tests or the changed 
behaviour.

I'm less worried about backwards compatibility with RawLocal than I am with 
consistency with HDFS, because that's what we try to do everywhere: make things 
work like that, except for the bits that don't. 

Bear in mind that posix rename() is slightly different from HDFS rename() w.r.t 
empty directories. That is believed to be an accidental mistake in the HDFS 
behaviour that we are all stuck with. So if the outcome of the contract tests 
is different, well, we will have to consider a new option to enable/disable in 
the test contract.

Now, what happens on windows?

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is copy-on-fail: when rename fails, it copies the source 
> and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic, RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java nio framework, both could be eliminated, since it is not 
> platform dependent and provides cross-volume rename.
> In addition, the fallback logic for Windows is not correct, since Java io 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overrides the 
> destination even if the source or the destination is a file. So to make them 
> compatible, we have to check that either the source or the destination is a 
> directory before we add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this.
>  * Where the contract test failed because of a Java-specific quirk and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15346) S3ARetryPolicy for 400/BadArgument to be "fail"

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15346:

Attachment: HADOOP-15346-001.patch

> S3ARetryPolicy for 400/BadArgument to be "fail"
> ---
>
> Key: HADOOP-15346
> URL: https://issues.apache.org/jira/browse/HADOOP-15346
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15346-001.patch
>
>
> The retry policy for the AWS 400/BadArgument response is currently "treat as 
> a connectivity error" on the basis that sometimes it works again.
> It doesn't, not normally, and by using the connectivity retry policy, 
> unrecoverable failures can take time to surface.
> Proposed: switch to a fail fast policy for BadArgumentException
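A simplified illustration of the shape of such a change, using Hadoop's generic 
retry helpers; this is not the actual S3ARetryPolicy code, and the class and 
parameter names are mine:

{code:java}
// Sketch: route the 400/Bad Request exception class to a fail-fast
// policy while everything else keeps the existing connectivity policy.
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class FailFastOn400Sketch {
  public static RetryPolicy build(RetryPolicy connectivityPolicy,
      Class<? extends Exception> badRequestException) {
    Map<Class<? extends Exception>, RetryPolicy> byException = new HashMap<>();
    // A 400 response will not recover on retry, so fail on first sight.
    byException.put(badRequestException, RetryPolicies.TRY_ONCE_THEN_FAIL);
    return RetryPolicies.retryByException(connectivityPolicy, byException);
  }
}
{code}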



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15346) S3ARetryPolicy for 400/BadArgument to be "fail"

2018-04-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15346:

Status: Patch Available  (was: Open)

test: s3 ireland

> S3ARetryPolicy for 400/BadArgument to be "fail"
> ---
>
> Key: HADOOP-15346
> URL: https://issues.apache.org/jira/browse/HADOOP-15346
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15346-001.patch
>
>
> The retry policy for the AWS 400/BadArgument response is currently "treat as 
> a connectivity error" on the basis that sometimes it works again.
> It doesn't, not normally, and by using the connectivity retry policy, 
> unrecoverable failures can take time to surface.
> Proposed: switch to a fail fast policy for BadArgumentException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15346) S3ARetryPolicy for 400/BadArgument to be "fail"

2018-04-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425917#comment-16425917
 ] 

Steve Loughran commented on HADOOP-15346:
-

Patch 001: move to fail fast on 400/Bad Request.

> S3ARetryPolicy for 400/BadArgument to be "fail"
> ---
>
> Key: HADOOP-15346
> URL: https://issues.apache.org/jira/browse/HADOOP-15346
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15346-001.patch
>
>
> The retry policy for the AWS 400/BadArgument response is currently "treat as 
> a connectivity error" on the basis that sometimes it works again.
> It doesn't, not normally, and by using the connectivity retry policy, 
> unrecoverable failures can take time to surface.
> Proposed: switch to a fail fast policy for BadArgumentException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-04 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Attachment: HADOOP-15362.1.patch

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15362) Review of Configuration.java

2018-04-04 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HADOOP-15362:


Assignee: BELUGA BEHR

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15362) Review of Configuration.java

2018-04-04 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15362:
-
Status: Patch Available  (was: Open)

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch
>
>
> * Various improvements
> * Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15363) (transient) ITestS3AInconsistency.testOpenFailOnRead S3Guard failure

2018-04-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425938#comment-16425938
 ] 

Steve Loughran commented on HADOOP-15363:
-

Stack trace below. Could not replicate in two follow-up runs. Surfaced during a 
heavy-load parallel test run, BTW.
{code}
[ERROR] testOpenFailOnRead(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
Time elapsed: 9.95 s  <<< FAILURE!
java.lang.AssertionError: S3Guard failed to handle fail-on-read
at 
org.apache.hadoop.fs.contract.ContractTestUtils.fail(ContractTestUtils.java:528)
at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.doOpenFailOnReadTest(ITestS3AInconsistency.java:185)
at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testOpenFailOnRead(ITestS3AInconsistency.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.io.FileNotFoundException: read(b, 0, 4) on key 
fork-0002/test/ancestor/file-to-read-DELAY_LISTING_ME failed: injecting error 
51/100 for test.
at 
org.apache.hadoop.fs.s3a.InconsistentS3Object.readFailpoint(InconsistentS3Object.java:158)
at 
org.apache.hadoop.fs.s3a.InconsistentS3Object.access$200(InconsistentS3Object.java:39)
at 
org.apache.hadoop.fs.s3a.InconsistentS3Object$InconsistentS3InputStream.read(InconsistentS3Object.java:227)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:451)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:441)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.doOpenFailOnReadTest(ITestS3AInconsistency.java:178)
... 13 more
{code}

> (transient) ITestS3AInconsistency.testOpenFailOnRead S3Guard failure
> 
>
> Key: HADOOP-15363
> URL: https://issues.apache.org/jira/browse/HADOOP-15363
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Test failure
> {code}
>   ITestS3AInconsistency.testOpenFailOnRead:162->doOpenFailOnReadTest:185 
> S3Guard failed to handle fail-on-read
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15362) Review of Configuration.java

2018-04-04 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-15362:


 Summary: Review of Configuration.java
 Key: HADOOP-15362
 URL: https://issues.apache.org/jira/browse/HADOOP-15362
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.0.0
Reporter: BELUGA BEHR


* Various improvements
* Fix a lot of checkstyle errors



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425825#comment-16425825
 ] 

genericqa commented on HADOOP-15361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 68 unchanged - 1 fixed = 70 total (was 69) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestChecksumFileSystem |
|   | hadoop.crypto.key.TestKeyProviderFactory |
|   | hadoop.crypto.key.TestKeyShell |
|   | hadoop.fs.viewfs.TestViewFsTrash |
|   | hadoop.fs.contract.localfs.TestLocalFSContractRename |
|   | hadoop.fs.TestLocalFileSystemPermission |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917549/HADOOP-15361.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d93c94b4fe0 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b779f4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14431/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425907#comment-16425907
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

Rushabh S Shah's +1 is effectively a binding vote.
For the record here's my +1 too (pending the checkstyle fix)

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
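To illustrate why the lookup misses under HA (hostnames are made up; the exact 
service string also depends on hadoop.security.token.service.use_ip):

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class KmsTokenKeyDemo {
  public static void main(String[] args) {
    // Each KMS endpoint yields a distinct token-service key, so a token
    // fetched from one instance is never found when the client talks to
    // another instance behind LoadBalancingKMSClientProvider.
    InetSocketAddress kms1 = new InetSocketAddress("kms1.example.com", 16000);
    InetSocketAddress kms2 = new InetSocketAddress("kms2.example.com", 16000);
    Text service1 = SecurityUtil.buildTokenService(kms1);
    Text service2 = SecurityUtil.buildTokenService(kms2);
    // Different keys, so creds.getToken(service2) misses the token
    // that was stored under service1.
    System.out.println(service1 + " vs " + service2);
  }
}
{code}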



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15363) (transient) ITestS3AInconsistency.testOpenFailOnRead S3Guard failure

2018-04-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15363:
---

 Summary: (transient) ITestS3AInconsistency.testOpenFailOnRead 
S3Guard failure
 Key: HADOOP-15363
 URL: https://issues.apache.org/jira/browse/HADOOP-15363
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


Test failure
{code}
  ITestS3AInconsistency.testOpenFailOnRead:162->doOpenFailOnReadTest:185 
S3Guard failed to handle fail-on-read
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425973#comment-16425973
 ] 

Larry McCay commented on HADOOP-15357:
--

Hi [~Jim_Brennan] - the patch looks good.

I notice that you decided not to use the get() method itself. Was this due to 
thinking that we don't need the handleDeprecation stuff?

If so, why do you think we don't need it?

I'm not saying that we do but would like to understand why you think we don't.

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the value of the variables.   After 
> [HADOOP-13556], it now uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and more 
> importantly variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.
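For illustration, a standalone sketch of the substitution-preserving behaviour; 
the helper class is mine, not the patch itself:

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class PrefixedPropsSketch {
  // Resolve each matching key through Configuration#get() so that
  // ${var} substitution and deprecation handling apply to the values.
  public static Map<String, String> getPropsWithPrefix(Configuration conf,
      String prefix) {
    Map<String, String> result = new HashMap<>();
    for (Map.Entry<String, String> entry : conf) { // raw property entries
      String name = entry.getKey();
      if (name.startsWith(prefix)) {
        // conf.get() expands ${variable_name}; entry.getValue() would not.
        result.put(name.substring(prefix.length()), conf.get(name));
      }
    }
    return result;
  }
}
{code}

With base=/tmp and my.prefix.path=${base}/data set, the returned map would hold 
path=/tmp/data rather than the literal ${base}/data.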



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15346) S3ARetryPolicy for 400/BadArgument to be "fail"

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426031#comment-16426031
 ] 

genericqa commented on HADOOP-15346:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
32s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15346 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917584/HADOOP-15346-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3363f73882f6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7853ec8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14434/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14434/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3ARetryPolicy for 400/BadArgument to be "fail"
> ---
>
> Key: HADOOP-15346
> URL: 

[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-04 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426069#comment-16426069
 ] 

Jim Brennan commented on HADOOP-15357:
--

[~lmccay], thanks for the review!
{quote}I notice that you decided not to use the get() method itself. Was this 
due to thinking that we don't need the handleDeprecation stuff?

If so, why do you think we don't need it?
{quote}
I was thinking that the variables that this function finds are not likely to be 
known constants, and as such cannot be in the deprecated map.   Only the 
prefixed portion is necessarily a known constant.  So it seemed like wasted 
effort to do the deprecated handling.   e.g.: with {{my_prefix[var]=[value]}}, 
would we ever find {{my_prefix[var]}} in the deprecated map?

Looking at it again, I'm not convinced by my own reasoning - I was thinking of 
the MAPREDUCE-7069 use-case, but if this function is used to find a set of 
known properties with a common prefix, it is certainly possible that they could 
require deprecation handling.

I'm inclined to put up a new patch changing this to use get().  Let me know if 
you disagree.
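For illustration of the case where deprecation handling would matter, a minimal 
demo of a deprecated key under a shared prefix (all key names made up):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PrefixDeprecationDemo {
  public static void main(String[] args) {
    // Hypothetical key names, for illustration only.
    Configuration.addDeprecation("my.prefix.old.timeout", "my.prefix.timeout");
    Configuration conf = new Configuration(false);
    conf.set("my.prefix.timeout", "30");
    // get() maps the deprecated name to the new one and finds "30";
    // a raw Properties lookup of the deprecated name would return null.
    System.out.println(conf.get("my.prefix.old.timeout")); // prints 30
  }
}
{code}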

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the value of the variables.   After 
> [HADOOP-13556], it now uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and more 
> importantly variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15362) Review of Configuration.java

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426071#comment-16426071
 ] 

genericqa commented on HADOOP-15362:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 61 unchanged - 77 fixed = 64 total (was 138) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 9 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestLdapGroupsMapping |
|   | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.security.TestSecurityUtil |
|   | hadoop.security.alias.TestCredentialProviderFactory |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.conf.TestConfiguration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917581/HADOOP-15362.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba65aaac4fbf 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42cd367 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | 

[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries

2018-04-04 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426073#comment-16426073
 ] 

Aaron Fabbri commented on HADOOP-14759:
---

Thanks for the updated patch [~gabor.bota].  I will try to test and review this 
today.

> S3GuardTool prune to prune specific bucket entries
> --
>
> Key: HADOOP-14759
> URL: https://issues.apache.org/jira/browse/HADOOP-14759
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, 
> HADOOP-14759.003.patch, HADOOP-14759.004.patch, HADOOP-14759.005.patch, 
> HADOOP-14759.006.patch
>
>
> Users may think that when you provide a URI to a bucket, you are pruning all 
> entries in the table *for that bucket*. In fact you are purging all entries 
> across all buckets in the table:
> {code}
> hadoop s3guard prune -days 7 s3a://ireland-1
> {code}
> It should be restricted to that bucket, unless you specify otherwise.
> Maybe also add a hard date option rather than only a relative one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426094#comment-16426094
 ] 

Larry McCay commented on HADOOP-15357:
--

I don't disagree - that is not to say that I know we need it, but it is 
certainly safer to support it.

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the value of the variables.   After 
> [HADOOP-13556], it now uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and more 
> importantly variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15357) Configuration.getPropsWithPrefix no longer does variable substitution

2018-04-04 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-15357:
-
Attachment: HADOOP-15357.003.patch

> Configuration.getPropsWithPrefix no longer does variable substitution
> -
>
> Key: HADOOP-15357
> URL: https://issues.apache.org/jira/browse/HADOOP-15357
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-15357.001.patch, HADOOP-15357.002.patch, 
> HADOOP-15357.003.patch
>
>
> Before [HADOOP-13556], Configuration.getPropsWithPrefix() used the 
> Configuration.get() method to get the value of the variables.   After 
> [HADOOP-13556], it now uses props.getProperty().
> The difference is that Configuration.get() does deprecation handling and more 
> importantly variable substitution on the value.  So if a property has a 
> variable specified with ${variable_name}, it will no longer be expanded when 
> retrieved via getPropsWithPrefix().
> Was this change in behavior intentional?  I am using this function in the fix 
> for [MAPREDUCE-7069], but we do want variable expansion to happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15360) Log some more helpful information when catch RuntimeException or Error in IPC.Server

2018-04-04 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425194#comment-16425194
 ] 

Wei-Chiu Chuang commented on HADOOP-15360:
--

FYI the AIOOBE is most likely fixed by HDFS-11755 + HDFS-11445

> Log some more helpful information when catch RuntimeException or Error in 
> IPC.Server 
> -
>
> Key: HADOOP-15360
> URL: https://issues.apache.org/jira/browse/HADOOP-15360
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: He Xiaoqiao
>Priority: Major
>
> IPC.Server#logException does not print the exception stack trace when it 
> catches a RuntimeException or Error, for instance:
> {code:java}
> 2018-03-28 21:52:25,385 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 17 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo 
> from *.*.*.*:59326 Call#46 Retry#0 java.lang.ArrayIndexOutOfBoundsException: 0
> {code}
> This log message is not helpful for debugging. I think it is necessary to 
> print a more helpful message or the full stack trace when the exception is a 
> RuntimeException or Error.
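As a sketch of the proposal (this is not the actual IPC.Server code, and it 
assumes an SLF4J-style logger):

{code:java}
// Sketch: include the full stack trace when the caught Throwable is a
// RuntimeException or an Error; keep the terse one-liner otherwise.
import org.slf4j.Logger;

public class LogExceptionSketch {
  static void logException(Logger log, Throwable e, String callInfo) {
    if (e instanceof RuntimeException || e instanceof Error) {
      log.warn(callInfo, e); // logs the full stack trace
    } else {
      log.info("{} {}", callInfo, e.toString()); // terse form, as today
    }
  }
}
{code}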



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15360) Log some more helpful information when catch RuntimeException or Error in IPC.Server

2018-04-04 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425250#comment-16425250
 ] 

He Xiaoqiao commented on HADOOP-15360:
--

Thanks [~jojochuang], my Hadoop version is release-2.7.1, and it looks fixed in 
branch-2.7, but I haven't dug deeply into which issue fixed it; it may be one of 
the JIRAs you mentioned above. Thanks again.

> Log some more helpful information when catch RuntimeException or Error in 
> IPC.Server 
> -
>
> Key: HADOOP-15360
> URL: https://issues.apache.org/jira/browse/HADOOP-15360
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: He Xiaoqiao
>Priority: Major
>
> IPC.Server#logException does not print the exception stack trace when it 
> catches a RuntimeException or Error, for instance:
> {code:java}
> 2018-03-28 21:52:25,385 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 17 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo 
> from *.*.*.*:59326 Call#46 Retry#0 java.lang.ArrayIndexOutOfBoundsException: 0
> {code}
> This log message is not helpful for debugging. I think it is necessary to 
> print a more helpful message or the full stack trace when the exception is a 
> RuntimeException or Error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org