[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus s"directories only" scan still does a HEAD

2019-10-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16635:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> S3A innerGetFileStatus s"directories only" scan still does a HEAD
> -
>
> Key: HADOOP-16635
> URL: https://issues.apache.org/jira/browse/HADOOP-16635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
>
> The patch in HADOOP-16490 is incomplete: we are still issuing a HEAD for 
> each object, even though we only wanted the directory checks. As a result, 
> createFile is still vulnerable to 404 caching on unguarded S3 repos.
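To make the intent concrete, a minimal sketch of the probe split, assuming hypothetical helpers headObject() and listUnderPrefix() -this is illustrative, not the actual S3AFileSystem internals:

{code}
// Sketch only: a "directories only" scan must skip the object HEAD, as a
// HEAD miss is what poisons S3's 404 cache before createFile() writes.
FileStatus statusProbe(String key, boolean directoriesOnly) throws IOException {
  if (!directoriesOnly) {
    FileStatus file = headObject(key);       // HEAD key: the risky probe
    if (file != null) {
      return file;
    }
  }
  return listUnderPrefix(key + "/");         // LIST key/: directory check only
}
{code}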






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951099#comment-16951099
 ] 

Steve Loughran commented on HADOOP-15870:
-

Okay, I reverted the patch. We can decide what to do at our leisure.

I'm thinking we may need both of:

* Fix WebHDFSInputStream.available()
* Allow FS contracts to skip those probes (for downstream uses)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
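The shape of the fix, as a sketch -the field names match the description above, but this is illustrative rather than the committed patch:

{code}
// Sketch: compute the remainder from the *next* read position, so a seek()
// is reflected immediately, before any lazy reopen of the stream.
public synchronized long remainingInFile() {
  return this.contentLength - this.nextReadPos;
}
{code}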






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951089#comment-16951089
 ] 

Steve Loughran commented on HADOOP-15870:
-

FWIW, this is showing that WebHDFSInputStream.available() is always 0. To be 
purist, it should forward the probe all the way to the underlying input stream. 
So after a revert we could actually fix that.
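A minimal sketch of that forwarding, assuming a wrapper stream holding a {{wrappedStream}} field (the field name is an assumption, not the actual WebHDFS code):

{code}
// Sketch: forward available() to the wrapped stream instead of returning 0.
@Override
public int available() throws IOException {
  if (wrappedStream == null) {
    throw new IOException("Stream is closed");
  }
  return wrappedStream.available();
}
{code}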

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Reopened] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-15870:
-

Reopening. Two options: revert or fix. For mere test failures, I'm generally a 
fix-forward person.

I have no time to spare this week as I'm travelling. Reverting may be best for 
now.

At least we know what extra tests to run! I did try to run the ones I knew 
about, including HDFS and Azure, but must have missed this one.



> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2019-10-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951076#comment-16951076
 ] 

Steve Loughran commented on HADOOP-13223:
-

You need to upgrade to a version of snappy which deals with new platforms, e.g. arm64.

Pure NIO would be best. It would also be much better in testing, where it is 
near impossible to get that native library on the classpath.
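As an illustration of the pure-NIO direction, a self-contained sketch of reading file metadata through java.nio.file, with no native library or winutils.exe involved:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class NioStat {
  public static void main(String[] args) throws IOException {
    Path path = Paths.get(args[0]);
    // readAttributes() works the same on Windows and Unix: no winutils needed.
    BasicFileAttributes attrs =
        Files.readAttributes(path, BasicFileAttributes.class);
    System.out.println("size=" + attrs.size()
        + " directory=" + attrs.isDirectory()
        + " owner=" + Files.getOwner(path).getName());
  }
}
{code}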

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>Priority: Major
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.






[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus s"directories only" scan still does a HEAD

2019-10-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16635:

Summary: S3A innerGetFileStatus s"directories only" scan still does a HEAD  
(was: S3A innerGetFileStatus scans for directories-only still does a HEAD)

> S3A innerGetFileStatus s"directories only" scan still does a HEAD
> -
>
> Key: HADOOP-16635
> URL: https://issues.apache.org/jira/browse/HADOOP-16635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> The patch in HADOOP-16490 is incomplete: we are still issuing a HEAD for 
> each object, even though we only wanted the directory checks. As a result, 
> createFile is still vulnerable to 404 caching on unguarded S3 repos.






[jira] [Commented] (HADOOP-16653) S3Guard DDB overreacts to no tag access

2019-10-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950899#comment-16950899
 ] 

Steve Loughran commented on HADOOP-16653:
-

Certainly on read access denied, I'd like to see: silence and no attempt to 
update.

What about the sequence: read tag, tag not found, attempt write? Let's make 
that an info, not a warning. Warnings create support calls.
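Roughly the handling I have in mind -a sketch only, reusing the getVersionMarkerFromTags() name from the logs, not the actual DynamoDBMetadataStoreTableManager code:

{code}
// Sketch: treat "cannot read tags" as recoverable. If the version marker
// ITEM exists, the missing TAG is only worth a debug-level message.
try {
  versionMarker = getVersionMarkerFromTags(table);
} catch (AmazonDynamoDBException e) {
  LOG.debug("Unable to read tags of table {}: {}", tableName, e.toString());
  versionMarker = null;   // fall back to the version marker item
}
{code}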

> S3Guard DDB overreacts to no tag access
> ---
>
> Key: HADOOP-16653
> URL: https://issues.apache.org/jira/browse/HADOOP-16653
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you don't have permissions to read or write DDB tags, it logs a lot every 
> time you bring up a guarded FS.
> # We shouldn't worry so much about missing tag access if the version marker is there.
> # If you can't read the tag, there's no point trying to write it.






[jira] [Commented] (HADOOP-16653) S3Guard DDB overreacts to no tag access

2019-10-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950897#comment-16950897
 ] 

Steve Loughran commented on HADOOP-16653:
-

Log

{code}
2019-10-14 11:22:44,587 [JUnit-testRestrictDDBTagAccess] WARN  
s3guard.DynamoDBMetadataStoreTableManager 
(DynamoDBMetadataStoreTableManager.java:getVersionMarkerFromTags(255)) - 
Exception while getting tags from the dynamo table: User: 
arn:aws:sts::980678866538:assumed-role/stevel-s3guard/test is not authorized to 
perform: dynamodb:ListTagsOfResource on resource: 
arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request 
ID: P9V270FPO034B5E55QLRCJK8UVVV4KQNSO5AEMVJF66Q9ASUAAJG)
2019-10-14 11:22:44,587 [JUnit-testRestrictDDBTagAccess] INFO  
s3guard.DynamoDBMetadataStoreTableManager 
(DynamoDBMetadataStoreTableManager.java:verifyVersionCompatibility(417)) - 
Table hwdev-steve-ireland-new contains no version marker TAG but contains 
compatible version marker ITEM. Restoring the version marker item from item.
2019-10-14 11:22:44,622 [JUnit-testRestrictDDBTagAccess] WARN  
s3guard.DynamoDBMetadataStoreTableManager 
(DynamoDBMetadataStoreTableManager.java:tagTableWithVersionMarker(238)) - 
Exception during tagging table: User: 
arn:aws:sts::980678866538:assumed-role/stevel-s3guard/test is not authorized to 
perform: dynamodb:TagResource on resource: 
arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request 
ID: 
{code}

> S3Guard DDB overreacts to no tag access
> ---
>
> Key: HADOOP-16653
> URL: https://issues.apache.org/jira/browse/HADOOP-16653
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you don't have permissions to read or write DDB tags, it logs a lot every 
> time you bring up a guarded FS.
> # We shouldn't worry so much about missing tag access if the version marker is there.
> # If you can't read the tag, there's no point trying to write it.






[jira] [Created] (HADOOP-16653) S3Guard DDB overreacts to no tag access

2019-10-14 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16653:
---

 Summary: S3Guard DDB overreacts to no tag access
 Key: HADOOP-16653
 URL: https://issues.apache.org/jira/browse/HADOOP-16653
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Gabor Bota


If you don't have permissions to read or write DDB tags, it logs a lot every 
time you bring up a guarded FS.

# We shouldn't worry so much about missing tag access if the version marker is there.
# If you can't read the tag, there's no point trying to write it.






[jira] [Updated] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16642:

Status: Patch Available  (was: Open)

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to run the scale test iff the table 
> isn't PAYG. It's failing because the wrong exception text is being returned.
> Proposed: don't look for any text.
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}
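The "don't look for any text" proposal, sketched with LambdaTestUtils -the callable body is a placeholder for whatever throttled operation the test performs:

{code}
// Sketch: assert only on the exception class; accept any throttling text.
AWSServiceThrottledException ex = LambdaTestUtils.intercept(
    AWSServiceThrottledException.class,
    () -> execute(ms));   // execute(ms) is a placeholder for the throttled call
{code}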






[jira] [Commented] (HADOOP-16622) intermittent failure of ITestCommitOperations: too many s3guard writes

2019-10-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950340#comment-16950340
 ] 

Steve Loughran commented on HADOOP-16622:
-

HADOOP-16644 is going to get the modtime on file creation (small files) or 
issue a HEAD on the multipart uploads. If we can get that in, then the problem 
should go away. If it doesn't, there's something else lurking.

Thanks for doing the testing - it's always good to stress different AWS regions, 
as they are not uniform in behaviour.

> intermittent failure of ITestCommitOperations: too many s3guard writes
> --
>
> Key: HADOOP-16622
> URL: https://issues.apache.org/jira/browse/HADOOP-16622
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> intermittent failure of ITestCommitOperations; expected 2 s3guard writes, saw 
> 7
> the logged commit state shows that only two entries were added, so I'm not 
> sure what is up. Proposed: in HADOOP-16207 I will add s3guard.operations log 
> to debug so we get a trace of all DDB put/delete calls; this will let us 
> debug it when it surfaces again






[jira] [Comment Edited] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16897291#comment-16897291
 ] 

Steve Loughran edited comment on HADOOP-16478 at 10/11/19 4:14 PM:
---

{code}
java.nio.file.AccessDeniedException:something: getBucketLocation() on 
s3a://restricted: com.amazonaws.services.s3.model.AmazonS3Exception: Access 
Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request 
ID: 030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:703)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:1185)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:999)
at 
com.amazonaws.services.s3.AmazonS3Client.getBucketLocation(AmazonS3Client.java:1005)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getBucketLocation$3(S3AFileSystem.java:717)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 11 more
{code}


was (Author: ste...@apache.org):
{code}
java.nio.file.AccessDeniedException: mow-dev-istio-west-demo: 
getBucketLocation() on s3a://restricted: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
030653A1119B53A7; S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=), 
S3 Extended Request ID: 
lmr6jNHSrfpvjcuyJP4D0wovmqnfFVrnHOQNQD9SXV6ZVTF7eF5IHddEXnUtp2STMvxc7PySzkw=:AccessDenied
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getBucketLocation(S3AFileSystem.java:716)
at 

[jira] [Updated] (HADOOP-16645) S3A Delegation Token extension point to use StoreContext

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16645:

Status: Patch Available  (was: Open)

> S3A Delegation Token extension point to use StoreContext
> 
>
> Key: HADOOP-16645
> URL: https://issues.apache.org/jira/browse/HADOOP-16645
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Move the S3A DT code from HADOOP-14556 to take a StoreContext ref in its 
> ctor, rather than an S3AFileSystem.






[jira] [Commented] (HADOOP-16613) s3a to set fake directory marker contentType to application/x-directory

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949553#comment-16949553
 ] 

Steve Loughran commented on HADOOP-16613:
-

# Should we ourselves say content-type == application/x-directory means it is a 
dir, irrespective of length?
# How do we react to something without a trailing / which says it is an x-directory?
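For illustration, a sketch of how the marker PUT could set the console's content type; the AWS SDK v1 calls are real, the bucket/key variables are placeholders:

{code}
// Sketch: zero-byte directory marker tagged with application/x-directory.
ObjectMetadata md = new ObjectMetadata();
md.setContentLength(0);
md.setContentType("application/x-directory");
s3.putObject(new PutObjectRequest(
    bucket, key + "/", new ByteArrayInputStream(new byte[0]), md));
{code}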

> s3a to set fake directory marker contentType to application/x-directory
> ---
>
> Key: HADOOP-16613
> URL: https://issues.apache.org/jira/browse/HADOOP-16613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Jose Torres
>Priority: Minor
>
> S3AFileSystem doesn't set a contentType for fake directory files, causing it 
> to be inferred as "application/octet-stream". But fake directory files 
> created through the S3 web console have content type 
> "application/x-directory". We may want to adopt the web console behavior as a 
> standard, since some systems will rely on content type and not size + 
> trailing slash to determine if an object represents a directory.






[jira] [Resolved] (HADOOP-16607) s3a attempts to look up password/encryption fail if JCEKS file unreadable

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16607.
-
Resolution: Duplicate

> s3a attempts to look up password/encryption fail if JCEKS file unreadable
> -
>
> Key: HADOOP-16607
> URL: https://issues.apache.org/jira/browse/HADOOP-16607
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, security
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Priority: Minor
>
> Hive deployments can use a JCEKS file to store secrets, which Hive sets up
> to be readable only by the Hive user, listing it under 
> hadoop.credential.providers.
> When Hive tries to create an S3A FS instance as another user, via a doAs{}
> clause, the S3A FS getPassword() call fails on the resulting 
> AccessDeniedException -even if the secret it is looking for is in the XML file
> or, as with the encryption settings or session key, undefined.
> You can point the blame at Hive for this -it's the one with a forbidden 
> JCEKS file on the provider path- but I think it is easier to fix in S3AUtils 
> than in Hive, and safer than changing Configuration.
> ABFS is likely to see the same problem.
> I propose an option to set the fallback policy.
> I initially thought about always handling this: catching the exception, 
> attempting to downgrade to reading the XML, and if that fails, rethrowing 
> the caught exception.
> However, this will do the wrong thing if the option is completely undefined,
> as is common with the encryption settings.
> I don't want to simply default to log-and-continue here though, as it may be 
> a legitimate failure -such as when you really do want to read secrets from 
> such a source.
> Issue: what fallback policies?
>  
> * fail: fail fast. Today's policy; the default.
> * ignore: log and continue.
>  
> We could try and be clever in future. To get away with that, we would have 
> to declare which options are considered compulsory and re-throw the caught
> exception if no value was found in the XML file.
>  
> That can be a future enhancement -but it is why I want the policy to be an 
> enumeration, rather than a simple boolean.
>  
> Tests: should be straightforward; set hadoop.credential.providers to a 
> non-existent file and expect it to be processed according to the settings.
>  
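As a sketch of what that option could look like -the property name and the values are hypothetical, following the fail/ignore enumeration above:

{code}
// Sketch only: hypothetical option name; values "fail" | "ignore".
Configuration conf = new Configuration();
conf.set("fs.s3a.security.credential.provider.fallback", "ignore");
// "fail":   today's behaviour -rethrow the AccessDeniedException.
// "ignore": log the failure and fall back to the XML configuration.
{code}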






[jira] [Updated] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16478:

Status: Patch Available  (was: Open)

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics missing.
> Preferred: catch the exception, warn it's unknown, and only log at debug.
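A sketch of the preferred handling; the printing helper is paraphrased from the bucket-info code, not quoted:

{code}
// Sketch: don't let a 403 on getBucketLocation() kill all other diagnostics.
String location;
try {
  location = fs.getBucketLocation();
} catch (AccessDeniedException e) {
  location = "(unknown: access denied)";
  LOG.debug("Failed to get bucket location", e);
}
println(out, "\tLocation: %s", location);
{code}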






[jira] [Assigned] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16478:
---

Assignee: Steve Loughran

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics missing.
> Preferred: catch the exception, warn it's unknown, and only log at debug.






[jira] [Commented] (HADOOP-16324) S3A Delegation Token code to spell "Marshalled" as Marshaled

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949534#comment-16949534
 ] 

Steve Loughran commented on HADOOP-16324:
-

I'm doing this in the HADOOP-16645 PR, as that's backwards-incompatible too. For 
this to go in, it'll need co-ordination with those people who are using the 
current release (sorry!).

> S3A Delegation Token code to spell "Marshalled" as Marshaled
> 
>
> Key: HADOOP-16324
> URL: https://issues.apache.org/jira/browse/HADOOP-16324
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> Apparently {{MarshalledCredentials}} is the EN_UK locale spelling; the 
> EN_US one is {{Marshaled}}. Fix in code and docs before anything ships, 
> because those classes do end up being used by all external implementations of 
> S3A Delegation Tokens.
> I am grateful to [~rlevas] for pointing out the error of my ways.






[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Resolution: Done
Status: Resolved  (was: Patch Available)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch, 
> HADOOP-15870-008.patch, HADOOP-15920-06.patch, HADOOP-15920-07.patch
>
>







[jira] [Commented] (HADOOP-16644) Retrive modtime of PUT file from store, via response or HEAD

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949514#comment-16949514
 ] 

Steve Loughran commented on HADOOP-16644:
-

Rummaging around the open JIRAs, HADOOP-16176 proposes adding more tests, and 
highlights that the modtime of a multipart upload may be that of the start 
time, not the end time. So for a big PUT, it will be way off. That HEAD 
sounds critical there.

> Retrive modtime of PUT file from store, via response or HEAD
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisaton -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}






[jira] [Assigned] (HADOOP-16164) S3aDelegationTokens to add accessor for tests to get at the token binding

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16164:
---

Assignee: (was: Steve Loughran)

> S3aDelegationTokens to add accessor for tests to get at the token binding
> -
>
> Key: HADOOP-16164
> URL: https://issues.apache.org/jira/browse/HADOOP-16164
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Critical
>
> For testing, it turns out to be useful to get at the current token binding in 
> the S3ADelegationTokens instance of a filesystem.
> Provide an accessor, tagged as for testing only.






[jira] [Commented] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949512#comment-16949512
 ] 

Steve Loughran commented on HADOOP-15961:
-

Hey, 
now that I've finally got the available() patch in, let's wrap this one up too. 
Can you start with a GitHub PR off trunk? Thanks.

> S3A committers: make sure there's regular progress() calls
> --
>
> Key: HADOOP-15961
> URL: https://issues.apache.org/jira/browse/HADOOP-15961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-15961-001.patch, HADOOP-15961-002.patch, 
> HADOOP-15961-003.patch
>
>
> MAPREDUCE-7164 highlights how, inside job/task commit, more context.progress() 
> callbacks are needed -just for HDFS.
> The S3A committers should be reviewed similarly.
> At a glance:
> StagingCommitter.commitTaskInternal() is at risk if a task writes/uploads 
> enough data to the local FS that the upload takes longer than the timeout.
> It should call progress() as every single file commits, or better: modify 
> {{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
> after every part upload.
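A sketch of the Progressable idea -the method shape is illustrative, and uploadParts() is a placeholder, but {{Progressable}} is the real org.apache.hadoop.util interface:

{code}
// Sketch: report progress after every part upload so the task never goes
// silent long enough to be timed out during commitTaskInternal().
private void uploadFileToPendingCommit(File file, Progressable progress)
    throws IOException {
  for (PartETag part : uploadParts(file)) {   // uploadParts(): placeholder
    progress.progress();                      // resets the liveness timer
  }
}
{code}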






[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus scans for directories-only still does a HEAD

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16635:

Status: Patch Available  (was: Open)

> S3A innerGetFileStatus scans for directories-only still does a HEAD
> ---
>
> Key: HADOOP-16635
> URL: https://issues.apache.org/jira/browse/HADOOP-16635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> The patch in HADOOP-16490 is incomplete: we are still issuing a HEAD for 
> each object, even though we only wanted the directory checks. As a result, 
> createFile is still vulnerable to 404 caching on unguarded S3 repos.






[jira] [Updated] (HADOOP-16632) Speculating & Partitioned S3A magic committers can leave pending files under __magic

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16632:

Status: Patch Available  (was: Open)

> Speculating & Partitioned S3A magic committers can leave pending files under 
> __magic
> 
>
> Key: HADOOP-16632
> URL: https://issues.apache.org/jira/browse/HADOOP-16632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.3, 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Partitioned S3A magic committers can leave pending files, and maybe upload data.
> This surfaced in an assertion failure on a parallel test run.
> I thought it was actually a test failure, but with HADOOP-16207 all the docs 
> are preserved in the local FS and I can understand what happened.
> h3. Junit process
> {code}
> [INFO] 
> [ERROR] Failures: 
> [ERROR] 
> ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356
>  Expected a java.io.FileNotFoundException to be thrown, but got the result: : 
> "Found magic dir which should have been deleted at 
> S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic;
>  isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
> group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
> isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
> versionId=null
> [s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
> s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending
> {code}
> Full details to follow in the comment as they are, well, detailed.
>  
> Key point: AM-side job and task cleanup can happen before the worker task 
> finishes its writes. This will result in files under __magic. It may result 
> in pending uploads too -but only if the write began after the AM job cleanup 
> did a list + abort of all pending uploads under the destination directory






[jira] [Updated] (HADOOP-16651) S3 getBucketLocation() can return "US" for us-east

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16651:

Summary: S3 getBucketLocation() can return "US" for us-east  (was: s3 
getBucketLocation() can return "US" for us-east)

> S3 getBucketLocation() can return "US" for us-east
> --
>
> Key: HADOOP-16651
> URL: https://issues.apache.org/jira/browse/HADOOP-16651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Priority: Major
>
> see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0
> apparently getBucketLocation can return US for a region when it is really 
> us-east-1
> this confuses DDB region calculation, which needs the us-east value.
> proposed: change it in S3AFS.getBucketLocation






[jira] [Assigned] (HADOOP-16651) S3 getBucketLocation() can return "US" for us-east

2019-10-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16651:
---

Assignee: Steve Loughran

> S3 getBucketLocation() can return "US" for us-east
> --
>
> Key: HADOOP-16651
> URL: https://issues.apache.org/jira/browse/HADOOP-16651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0
> apparently getBucketLocation can return US for a region when it is really 
> us-east-1
> this confuses DDB region calculation, which needs the us-east value.
> proposed: change it in S3AFS.getBucketLocation






[jira] [Commented] (HADOOP-16651) s3 getBucketLocation() can return "US" for us-east

2019-10-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16949511#comment-16949511
 ] 

Steve Loughran commented on HADOOP-16651:
-

including this in the HADOOP-16478 patch, pulling up code in HADOOP-16599

> s3 getBucketLocation() can return "US" for us-east
> --
>
> Key: HADOOP-16651
> URL: https://issues.apache.org/jira/browse/HADOOP-16651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Priority: Major
>
> see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0
> apparently getBucketLocation can return US for a region when it is really 
> us-east-1
> this confuses DDB region calculation, which needs the us-east value.
> proposed: change it in S3AFS.getBucketLocation






[jira] [Created] (HADOOP-16651) s3 getBucketLocation() can return "US" for us-east

2019-10-11 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16651:
---

 Summary: s3 getBucketLocation() can return "US" for us-east
 Key: HADOOP-16651
 URL: https://issues.apache.org/jira/browse/HADOOP-16651
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.3, 3.2.1
Reporter: Steve Loughran


see: https://forums.aws.amazon.com/thread.jspa?messageID=796829=0

apparently getBucketLocation can return US for a region when it is really 
us-east-1

this confuses DDB region calculation, which needs the us-east value.

proposed: change it in S3AFS.getBucketLocation
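The proposed normalisation is a one-liner; a sketch of what it could look like inside getBucketLocation():

{code}
// Sketch: map the legacy "US" location constant back to a real region name,
// so the DynamoDB region calculation gets the value it expects.
String region = s3.getBucketLocation(bucketName);
return "US".equals(region) ? "us-east-1" : region;
{code}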






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~Jack-Lee] - merged the final PR in; thanks for this. I'll also add it to the 
list of things to backport to 3.2.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Resolved] (HADOOP-16650) ITestS3AClosedFS failing -junit test thread

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16650.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> ITestS3AClosedFS failing -junit test thread
> ---
>
> Key: HADOOP-16650
> URL: https://issues.apache.org/jira/browse/HADOOP-16650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
>
> The new thread leak test in HADOOP-16570 is failing for me in test runs; need 
> to strip out all Junit-* threads for the filter to be reliable






[jira] [Commented] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948747#comment-16948747
 ] 

Steve Loughran commented on HADOOP-16649:
-

Yes - the hard part is coming up with a test; look in 
hadoop-common-project/hadoop-common/src/test/scripts for what's there already.

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
> DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  






[jira] [Commented] (HADOOP-16650) ITestS3AClosedFS failing -junit test thread

2019-10-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948556#comment-16948556
 ] 

Steve Loughran commented on HADOOP-16650:
-

{code}
[ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.626 s 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AClosedFS
[ERROR] org.apache.hadoop.fs.s3a.ITestS3AClosedFS  Time elapsed: 0.485 s  <<< 
FAILURE!
java.lang.AssertionError: 
[The threads at the end of the test run] 
Expecting :
 <["Finalizer",
"JUnit",
"JUnit-testClosedOpen",
"MutableQuantiles-0",
"Reference Handler",
"Signal Dispatcher",
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner",
"process reaper",
"surefire-forkedjvm-command-thread",
"surefire-forkedjvm-ping-30s"]>
to be subset of
 <["Attach Listener",
"Finalizer",
"JUnit",
"MutableQuantiles-0",
"Reference Handler",
"Signal Dispatcher",
"java-sdk-http-connection-reaper",
"java-sdk-progress-listener-callback-thread",
"org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner",
"process reaper",
"surefire-forkedjvm-command-thread",
"surefire-forkedjvm-ping-30s"]>
but found these extra elements:
 <["JUnit-testClosedOpen"]>
at 
org.apache.hadoop.fs.s3a.ITestS3AClosedFS.checkForThreadLeakage(ITestS3AClosedFS.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
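The fix shape, as a sketch: filter on the JUnit- prefix rather than exact thread names:

{code}
// Sketch: drop every "JUnit-*" thread (each test method spawns its own)
// before asserting that no threads have leaked.
Set<String> threads = Thread.getAllStackTraces().keySet().stream()
    .map(Thread::getName)
    .filter(name -> !name.startsWith("JUnit"))
    .collect(Collectors.toSet());
{code}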

> ITestS3AClosedFS failing -junit test thread
> ---
>
> Key: HADOOP-16650
> URL: https://issues.apache.org/jira/browse/HADOOP-16650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> The new thread leak test in HADOOP-16570 is failing for me in test runs; need 
> to strip out all Junit-* threads for the filter to be reliable






[jira] [Created] (HADOOP-16650) ITestS3AClosedFS failing -junit test thread

2019-10-10 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16650:
---

 Summary: ITestS3AClosedFS failing -junit test thread
 Key: HADOOP-16650
 URL: https://issues.apache.org/jira/browse/HADOOP-16650
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The new thread leak test in HADOOP-16570 is failing for me in test runs; need 
to strip out all Junit-* threads for the filter to be reliable






[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948416#comment-16948416
 ] 

Steve Loughran commented on HADOOP-16478:
-

While I'm near this command, I'd like to also list the auth directories for a 
bucket, as it is now a bit trickier to work out what is going on.

This command is proving to be step one in understanding S3Guard-related issues 
- it needs to be complete.

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics missing.
> Preferred: catch the exception, warn it's unknown, and only log at debug.






[jira] [Commented] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948390#comment-16948390
 ] 

Steve Loughran commented on HADOOP-16649:
-

Looks like a bug in {{hadoop_add_param}}; it's looking for a specific string, 
and rejecting things.

Someone who understands those scripts is going to have to help here.

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
> DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16649:

Description: 
When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get ignored.

eg setting this:

HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"

 

 with debug on:

 

DEBUG: Profiles: importing 
/opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES 
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure

 

whereas:

 

HADOOP_OPTIONAL_TOOLS="hadoop-azure"

 

 with debug on:


DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure

 

  was:
When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get ignored.

eg setting this:

HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"

 

 with debug on:

 

DEBUG: Profiles: importing 
/opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES 
 hadoop-azure

 

whereas:

 

HADOOP_OPTIONAL_TOOLS="hadoop-azure"

 

 with debug on:


DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure

 


> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
> DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16649:

Description: 
When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get ignored.

eg setting this:

HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"

 

 with debug on:

 

DEBUG: Profiles: importing 
/opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES 
 hadoop-azure

 

whereas:

 

HADOOP_OPTIONAL_TOOLS="hadoop-azure"

 

 with debug on:


DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure

 

  was:
When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get ignored.

eg setting this:

HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"

 

 with debug on:

 

DEBUG: Profiles: importing 
/opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES declined hadoop-azure

 

whereas:

 

HADOOP_OPTIONAL_TOOLS="hadoop-azure"

 

 with debug on:


DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure

 


> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, fs
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
>  hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16649) Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will ignore hadoop-azure

2019-10-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16649:

Component/s: (was: fs)
 (was: conf)
 bin

> Defining hadoop-azure and hadoop-azure-datalake in HADOOP_OPTIONAL_TOOLS will 
> ignore hadoop-azure
> -
>
> Key: HADOOP-16649
> URL: https://issues.apache.org/jira/browse/HADOOP-16649
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 3.2.1
> Environment: Shell, but it also trickles down into all code using 
> `FileSystem` 
>Reporter: Tom Lous
>Priority: Minor
>
> When defining both `hadoop-azure` and `hadoop-azure-datalake` in 
> HADOOP_OPTIONAL_TOOLS in `conf/hadoop-env.sh`, `hadoop-azure` will get 
> ignored.
> eg setting this:
> HADOOP_OPTIONAL_TOOLS="hadoop-azure-datalake,hadoop-azure"
>  
>  with debug on:
>  
> DEBUG: Profiles: importing 
> /opt/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure-datalake
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES 
>  hadoop-azure
>  
> whereas:
>  
> HADOOP_OPTIONAL_TOOLS="hadoop-azure"
>  
>  with debug on:
> DEBUG: Profiles: importing /opt/hadoop/libexec/shellprofile.d/hadoop-azure.sh
> DEBUG: HADOOP_SHELL_PROFILES accepted hadoop-azure
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16648) HDFS Native Client does not build correctly

2019-10-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16948386#comment-16948386
 ] 

Steve Loughran commented on HADOOP-16648:
-

[~rajesh.balamohan], have a look at the HDFS patch and see if it fixes this; if 
not, we should leave it open.

> HDFS Native Client does not build correctly
> ---
>
> Key: HADOOP-16648
> URL: https://issues.apache.org/jira/browse/HADOOP-16648
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 3.3.0
>Reporter: Rajesh Balamohan
>Priority: Blocker
>
> Builds are failing in PR with following exception in native client.  
> {noformat}
> [WARNING] make[2]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
>   2 3 4 5 6 7 8 9 10 11
> [WARNING] [ 28%] Built target common_obj
> [WARNING] make[2]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles
>   31
> [WARNING] [ 28%] Built target gmock_main_obj
> [WARNING] make[1]: Leaving directory 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/target'
> [WARNING] Makefile:127: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1591/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/proto/PROTOBUF_PROTOC_EXECUTABLE-NOTFOUND',
>  needed by 'main/native/libhdfspp/lib/proto/ClientNamenodeProtocol.hrpc.inl'. 
>  Stop.
> [WARNING] make[1]: *** 
> [main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [  0.301 
> s]
> [INFO] Apache Hadoop Build Tools .. SUCCESS [  1.348 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.501 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  1.391 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  0.115 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.168 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  4.490 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.773 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  7.922 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  1.381 
> s]
> [INFO] Apache Hadoop Common ... SUCCESS [ 34.562 
> s]
> [INFO] Apache Hadoop NFS .. SUCCESS [  5.583 
> s]
> [INFO] Apache Hadoop KMS .. SUCCESS [  5.931 
> s]
> [INFO] Apache Hadoop Registry . SUCCESS [  5.816 
> s]
> [INFO] Apache Hadoop Common Project ... SUCCESS [  0.056 
> s]
> [INFO] Apache Hadoop HDFS Client .. SUCCESS [ 27.104 
> s]
> [INFO] Apache Hadoop HDFS . SUCCESS [ 42.065 
> s]
> [INFO] Apache Hadoop HDFS Native Client ... FAILURE [ 19.349 
> s]
> {noformat}
> Creating this ticket, as couple of pull requests had the same issue.
> e.g 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1614/1/artifact/out/patch-compile-root.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16646) Backport S3A enhancements and fixes from trunk to branch-3.2

2019-10-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16646 started by Steve Loughran.
---
> Backport S3A enhancements and fixes from trunk to branch-3.2
> 
>
> Key: HADOOP-16646
> URL: https://issues.apache.org/jira/browse/HADOOP-16646
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport all the stable features from hadoop-aws in trunk to branch-3.2
> Note: we've already pulled most of these into CDP, so they have already had 
> integration testing, though there may be some differences in dependencies 
> (guava, mockito etc.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16646) Backport S3A enhancements and fixes from trunk to branch-3.2

2019-10-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947870#comment-16947870
 ] 

Steve Loughran commented on HADOOP-16646:
-

Planned process:

# get branch-3.2 building 
# see the current state of the 3.2 tests. I think they get a bit confused with 
test buckets which are sized 0/PAYG; I will ignore those failures, as one of 
the patches and the SDK update address that.
# cherry-pick each patch in order; rebuild and do a unit test run on each
# plus run the test cases each patch changed
# and every few cherry-picks, do a full test run
# then merge

...and repeat.

Now, I have actually backported a lot of these to branch-3.1-based code (CDP, 
HDP), including fixups for mockito changes. I could pull those specific fixes 
in, though I'd rather actually upgrade mockito in branch-3.2.



> Backport S3A enhancements and fixes from trunk to branch-3.2
> 
>
> Key: HADOOP-16646
> URL: https://issues.apache.org/jira/browse/HADOOP-16646
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport all the stable features from hadoop-aws in trunk to branch-3.2
> Note: we've already pulled most of these into CDP, so they have already had 
> integration testing, though there may be some differences in dependencies 
> (guava, mockito etc.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16646) Backport S3A enhancements and fixes from trunk to branch-3.2

2019-10-09 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16646:
---

 Summary: Backport S3A enhancements and fixes from trunk to 
branch-3.2
 Key: HADOOP-16646
 URL: https://issues.apache.org/jira/browse/HADOOP-16646
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, fs/s3
Affects Versions: 3.2.1
Reporter: Steve Loughran
Assignee: Steve Loughran


Backport all the stable features from hadoop-aws in trunk to branch-3.2

Note: we've already pulled most of these into CDP, so they have already had 
integration testing, though there may be some differences in dependencies 
(guava, mockito etc.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16645) S3A Delegation Token extension point to use StoreContext

2019-10-09 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16645:
---

 Summary: S3A Delegation Token extension point to use StoreContext
 Key: HADOOP-16645
 URL: https://issues.apache.org/jira/browse/HADOOP-16645
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Move the S3A DT code from HADOOP-14556 to take a StoreContext ref in its 
constructor, rather than an S3AFileSystem.
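Something like this -a sketch of the intent only; the class and field names 
here are placeholders, not necessarily the actual extension point:

{code}
// sketch: the DT binding keeps a StoreContext, not the whole filesystem
public abstract class AbstractDelegationTokenBinding {
  private final StoreContext storeContext;  // was: an S3AFileSystem ref

  protected AbstractDelegationTokenBinding(StoreContext storeContext) {
    this.storeContext = storeContext;
  }
}
{code}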



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16644) Retrieve modtime of PUT file from store, via response or HEAD

2019-10-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16644:

Summary: Retrieve modtime of PUT file from store, via response or HEAD  
(was: Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences)

> Retrieve modtime of PUT file from store, via response or HEAD
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisation -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16644) Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences

2019-10-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947525#comment-16947525
 ] 

Steve Loughran commented on HADOOP-16644:
-

Yeah, I'd just seen that too; it comes back in the metadata. I just need to 
pass it in through finishedWrite().

My initial PR always does the HEAD on a non-dir PUT; we can enhance that. 
There's a risk that for overwrites the HEAD returns the previous version. If we 
have the version ID all is good; if not, we can use the etag to verify we have 
the right value, retrying until we get the new one. And as we know, those load 
balancers can cache for many seconds.
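Roughly what I have in mind (a hedged sketch against the v1 SDK; the method and 
variable names are illustrative, not the PR):

{code}
// sketch: take the modtime from a HEAD issued right after the PUT,
// using the etag from the PUT response to detect a stale read
static long modTimeAfterPut(AmazonS3 s3, PutObjectRequest req,
    String bucket, String key) throws IOException {
  PutObjectResult put = s3.putObject(req);   // response carries the etag
  ObjectMetadata head = s3.getObjectMetadata(bucket, key);
  if (!head.getETag().equals(put.getETag())) {
    // previous version / cached entry came back: caller must retry the HEAD
    throw new IOException("etag mismatch on " + key);
  }
  return head.getLastModified().getTime();   // the store's own timestamp
}
{code}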

Regarding localisation and credentials, see HADOOP-16233: we have to mark the 
status entries as encrypted so the shared cache is not used (it checks for 
"world readable and not encrypted" for the shared cache). With that patch in, 
the localisation is done as the user, and uses their DT.

I believe that this will then use the jobconf; we would have to check.



> Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisation -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16642:

Summary: ITestDynamoDBMetadataStoreScale fails when throttled.  (was: 
ITestDynamoDBMetadataStoreScale failing as the error text does not match 
expectations)

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing with the wrong text being returned.
> Proposed: don't look for any text
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16642:
---

Assignee: Steve Loughran

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing with the wrong text being returned.
> Proposed: don't look for any text
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946990#comment-16946990
 ] 

Steve Loughran commented on HADOOP-16478:
-

The metastore already does this, but I've reviewed the message and tuned both 
it and the exception.

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to list the bucket location, then you get a stack trace, with all 
> other diagnostics missing.
> Preferred: catch the exception, warn that the location is unknown, and only 
> log the stack at debug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16615) Add password check for credential provider

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946968#comment-16946968
 ] 

Steve Loughran commented on HADOOP-16615:
-

Looks good; there are some minor changes in the test code I'd like. 

Could you submit this as a GitHub PR?

> Add password check for credential provider
> --
>
> Key: HADOOP-16615
> URL: https://issues.apache.org/jira/browse/HADOOP-16615
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: hong dongdong
>Priority: Major
> Attachments: HADOOP-16615.patch
>
>
> When we use the hadoop credential provider to store a password, we cannot be 
> sure the password is the same as what we remembered.
> So, I think we need a check tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16644) Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946787#comment-16946787
 ] 

Steve Loughran commented on HADOOP-16644:
-

We really need a way of getting that FS timestamp off the store. I am 
"reluctant" to do it in a HEAD straight after the create, but it is the only 
way to guarantee consistency. Doing the HEAD/update during the PUT would also 
address HADOOP-16412 (etag and version) and keep [~sseth] happy.

+[~gabor.bota], [~fabbri]

We could always think about making that HEAD after the PUT async, though that 
could lead to even more inconsistency pain.



> Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisation -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16644) Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946781#comment-16946781
 ] 

Steve Loughran commented on HADOOP-16644:
-

{code}
2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:processHeartbeat(1150)) - { 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
1570531828143, FILE, null } failed: Resource 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed 
on src filesystem (expected 1570531828143, was 1570531828000
java.io.IOException: Resource 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed 
on src filesystem (expected 1570531828143, was 1570531828000
at 
org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:273)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:248)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:241)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:229)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

{code}

> Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisation -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16644) Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences

2019-10-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16644:
---

 Summary: Intermittent failure of ITestS3ATerasortOnS3A: timestamp 
differences
 Key: HADOOP-16644
 URL: https://issues.apache.org/jira/browse/HADOOP-16644
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
 Environment: -Dparallel-tests -DtestsThreadCount=8 
-Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale

h2. Hypothesis:
the timestamp of the source file is being picked up from S3Guard, but when the 
NM does a getFileStatus call, a HEAD check is made -and this (due to the 
overloaded test system) is out of sync with the listing. S3Guard is updated, 
the corrected date returned and the localisation fails.


Reporter: Steve Loughran


Terasort of directory committer failing in resource localisation -the 
partitions.lst file has a different TS from that expected

Happens under loaded integration tests (threads = 8; not standalone); non-auth 
s3guard

{code}
2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:processHeartbeat(1150)) - { 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
1570531828143, FILE, null } failed: Resource 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed 
on src filesystem (expected 1570531828143, was 1570531828000
java.io.IOException: Resource 
s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed 
on src filesystem (expected 1570531828143, was 1570531828000
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16642) ITestDynamoDBMetadataStoreScale failing as the error text does not match expectations

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946678#comment-16946678
 ] 

Steve Loughran commented on HADOOP-16642:
-

The text we are looking for is actually one we create ourselves when we give up 
retrying.

Since the messages don't seem to match any branch (not even my own), there are 
two possibilities here:

* the retry logic is no longer working.
* we have moved operations around so that the specific place where throttling 
is occurring is not wrapped by the exception translation.

The stack traces show the invoker retry loop was involved, so I go with 
hypothesis #2.

It looks like the message is only added in {{retryBackoffOnBatchWrite}}; we are 
not using batched writes at the point where the failure occurred, hence: not 
wrapped.

Plan: stop looking for the text.
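i.e. the assertion becomes something like this (a sketch: {{intercept}} is the 
LambdaTestUtils helper, and the call inside the lambda is a placeholder for 
whichever throttled operation the test exercises):

{code}
// sketch: match on the exception type only; no message text is checked
AWSServiceThrottledException ex = intercept(
    AWSServiceThrottledException.class,
    () -> clearMetadataStore(ms, count));  // placeholder for the throttled op
{code}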

> ITestDynamoDBMetadataStoreScale failing as the error text does not match 
> expectations
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing with the wrong text being returned.
> Proposed: don't look for any text
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16642) ITestDynamoDBMetadataStoreScale failing as the error text does not match expectations

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946658#comment-16946658
 ] 

Steve Loughran commented on HADOOP-16642:
-

{code}
3:06:22 java.lang.AssertionError: 
13:06:22 Expected throttling message:  Expected to find ' This may be because 
the write threshold of DynamoDB is set too low.' 
but got unexpected exception: 
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 

Put tombstone on s3a://fake-bucket/moved-here: 
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ProvisionedThroughputExceededException; 
Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. 
(Service: AmazonDynamoDBv2; Status Code: 400; 
Error Code: ProvisionedThroughputExceededException; Request ID: 
L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
13:06:22at 
org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
13:06:22at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:193)
13:06:22at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
13:06:22at 
org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
13:06:22at 
org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
13:06:22at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
13:06:22at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
13:06:22at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerDelete(DynamoDBMetadataStore.java:490)
13:06:22at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.deleteSubtree(DynamoDBMetadataStore.java:520)
13:06:22at 
org.apache.hadoop.fs.s3a.scale.AbstractITestS3AMetadataStoreScale.clearMetadataStore(AbstractITestS3AMetadataStoreScale.java:196)
13:06:22at 
org.apache.hadoop.fs.s3a.scale.AbstractITestS3AMetadataStoreScale.test_020_Moves(AbstractITestS3AMetadataStoreScale.java:138)
13:06:22at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale.test_020_Moves(ITestDynamoDBMetadataStoreScale.java:184)
13:06:22at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
13:06:22at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
13:06:22at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
13:06:22at java.lang.reflect.Method.invoke(Method.java:498)
13:06:22at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
13:06:22at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
13:06:22at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
13:06:22at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
13:06:22at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
13:06:22at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
13:06:22at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
13:06:22at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
13:06:22 Caused by: 
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. (Service: 
AmazonDynamoDBv2; Status Code: 400; Error Code: 
ProvisionedThroughputExceededException; Request ID: 
L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
13:06:22at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
13:06:22at 

[jira] [Created] (HADOOP-16642) ITestDynamoDBMetadataStoreScale failing as the error text does not match expectations

2019-10-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16642:
---

 Summary: ITestDynamoDBMetadataStoreScale failing as the error text 
does not match expectations
 Key: HADOOP-16642
 URL: https://issues.apache.org/jira/browse/HADOOP-16642
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
isn't PAYG. It's failing with the wrong text being returned.

Proposed: don't look for any text

{code} 
13:06:22 java.lang.AssertionError: 
13:06:22 Expected throttling message:  Expected to find ' This may be because 
the write threshold of DynamoDB is set too low.' 
but got unexpected exception: 
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 

Put tombstone on s3a://fake-bucket/moved-here: 
com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ProvisionedThroughputExceededException; 
Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
The level of configured provisioned throughput for the table was exceeded. 
Consider increasing your provisioning level with the UpdateTable API. 
(Service: AmazonDynamoDBv2; Status Code: 400; 
Error Code: ProvisionedThroughputExceededException; Request ID: 
L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
13:06:22at 
org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
13
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16633) Tune hadoop-aws parallel test surefire/failsafe settings

2019-10-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946024#comment-16946024
 ] 

Steve Loughran commented on HADOOP-16633:
-

You can enable these tests today. I'm going with

-Dparallel-tests -DtestsThreadCount=8  -Dfailsafe.runOrder=random -Dscale

And I've added .surefire-* to my global git ignore. I don't have any real 
understanding of the balanced option; really I want slow tests to run first so 
we don't get held up by a single straggler in the parallel phase.

> Tune hadoop-aws parallel test surefire/failsafe settings
> 
>
> Key: HADOOP-16633
> URL: https://issues.apache.org/jira/browse/HADOOP-16633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3, test
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> We can do more to improve our test runs by looking at the failsafe docs
> [http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]
>  
> * default value to be by core, eg. 2C
> * use the runorder attribute which determines how parallel runs are chosen; 
> random for better nondeterminism
>  We'd need to experiment first, which can be done by setting failsafe.runOrder



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16498) AzureADAuthenticator cannot authenticate in china

2019-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16498.
-
Resolution: Duplicate

Fixed in HADOOP-16587

> AzureADAuthenticator cannot authenticate in china
> -
>
> Key: HADOOP-16498
> URL: https://issues.apache.org/jira/browse/HADOOP-16498
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> you can't auth with Azure China as it always tries to log in at the global 
> endpoint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

+1, merged to trunk. 

I tried to cherry-pick into branch-3.2 but there was a merge conflict; I think 
at least one other patch needs to go in first.

> Make AAD endpoint configurable on all Auth flows
> 
>
> Key: HADOOP-16587
> URL: https://issues.apache.org/jira/browse/HADOOP-16587
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16587.001.patch, HADOOP-16587.002.patch, 
> HADOOP-16587.003.patch, HADOOP-16587.004.patch
>
>
> Make AAD endpoint configurable on all Auth flows. Currently the auth endpoint 
> is hard-coded for the refresh-token and MSI flows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16626.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

Merged to trunk. Will review other tests which try to unset config options, to 
see if they are also exposed to this "quirk" of the Configuration class.
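If it is the usual one -per-bucket overrides being propagated back over an 
unset base key- then the pattern those tests need is something like this (a 
sketch; it assumes the {{S3ATestUtils}} helper, and the bucket name is just an 
example):

{code}
// sketch: unsetting the base key alone is not enough, since a per-bucket
// override is propagated back over it; both variants have to go
S3ATestUtils.removeBaseAndBucketOverrides("example-bucket", conf,
    Constants.S3_METADATA_STORE_IMPL);
{code}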

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> at 
> 

[jira] [Commented] (HADOOP-16633) Tune hadoop-aws parallel test surefire/failsafe settings

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944803#comment-16944803
 ] 

Steve Loughran commented on HADOOP-16633:
-

Need to add .surefire-* as a .gitignore pattern or your dir fills with noise:

.surefire-294760398D3959E2513AEBE592B10FF704BE4546
.surefire-453A152B0DF35F464F936BCAF7F407CA7671BA43
.surefire-8E57C79C17893C5B3E00303D6B725D5697467205
.surefire-9665BD9E6464CE0800434B669411DE367370118B
.surefire-DF7F2A2482F864AE5F5D482F70ED6DFD18C51669
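
A minimal sketch of the rule, assuming it goes in the top-level .gitignore:

{code}
# scratch files left behind by surefire/failsafe forks
.surefire-*
{code}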

> Tune hadoop-aws parallel test surefire/failsafe settings
> 
>
> Key: HADOOP-16633
> URL: https://issues.apache.org/jira/browse/HADOOP-16633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3, test
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> We can do more to improve our test runs by looking at the failsafe docs
> [http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]
>  
> * default value to be by core, eg. 2C
> * use the runorder attribute which determines how parallel runs are chosen; 
> random for better nondeterminism
>  We'd need to experiment first, which can be done by setting failsafe.runOrder



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16634) S3A ITest failures without S3Guard

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944802#comment-16944802
 ] 

Steve Loughran commented on HADOOP-16634:
-

This has shown HADOOP-16635; I'm creating a PR for that and adding all the 
other failures so that unguarded runs will pass the way they are meant to.

> S3A ITest failures without S3Guard
> --
>
> Key: HADOOP-16634
> URL: https://issues.apache.org/jira/browse/HADOOP-16634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> This has probably been lurking for a while but we hadn't noticed because if 
> your auth-keys xml settings mark a specific store as guarded, then the maven 
> CLI settings aren't picked up. Remove those bindings and things fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus scans for directories-only still does a HEAD

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16635:

Summary: S3A innerGetFileStatus scans for directories-only still does a 
HEAD  (was: S3A innerGetFileStatus scans for directories only still does a HEAD)

> S3A innerGetFileStatus scans for directories-only still does a HEAD
> ---
>
> Key: HADOOP-16635
> URL: https://issues.apache.org/jira/browse/HADOOP-16635
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> The patch in HADOOP-16490 is incomplete: we are still checking for the Head 
> of each object, even though we only wanted the directory checks. As a result, 
> createFile is still vulnerable to 404 caching on unguarded S3 repos. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16635) S3A innerGetFileStatus scans for directories only still does a HEAD

2019-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16635:
---

 Summary: S3A innerGetFileStatus scans for directories only still 
does a HEAD
 Key: HADOOP-16635
 URL: https://issues.apache.org/jira/browse/HADOOP-16635
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The patch in HADOOP-16490 is incomplete: we are still checking for the Head of 
each object, even though we only wanted the directory checks. As a result, 
createFile is still vulnerable to 404 caching on unguarded S3 repos. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16633) Tune hadoop-aws parallel test surefire/failsafe settings

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944779#comment-16944779
 ] 

Steve Loughran commented on HADOOP-16633:
-

bq.  alphabetical, reversealphabetical, random, hourly (alphabetical on even 
hours, reverse alphabetical on odd hours), failedfirst, balanced and filesystem.
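
A sketch of such an experiment, assuming the pom passes failsafe.runOrder through
to the plugin; the other flags mirror the usual parallel test invocation:

{code}
mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=12 -Dfailsafe.runOrder=balanced
{code}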

> Tune hadoop-aws parallel test surefire/failsafe settings
> 
>
> Key: HADOOP-16633
> URL: https://issues.apache.org/jira/browse/HADOOP-16633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3, test
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> We can do more to improve our test runs by looking at the failsafe docs
> [http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]
>  
> * default value to be by core, eg. 2C
> * use the runorder attribute which determines how parallel runs are chosen; 
> random for better nondeterminism
>  We'd need to experiment first, which can be done by setting failsafe.runOrder



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16634) S3A ITest failures without S3Guard

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944774#comment-16944774
 ] 

Steve Loughran commented on HADOOP-16634:
-

{code:java}
[INFO] 
[ERROR] Failures: 
[ERROR]   
ITestS3AFileOperationCost.testCostOfGetFileStatusOnEmptyDir:105->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 Count of object_metadata_requests starting=10 current=12 diff=2: 
object_metadata_requests expected:<1> but was:<2>
[ERROR]   
ITestS3GuardTtl.testListingFilteredExpiredItems:335->getDirListingMetadata:367 
[Metastrore directory listing of 
s3a://hwdev-steve-ireland-new/fork-0001/test/testListingFilteredExpiredItems] 
Expecting actual not to be null
[ERROR]   
ITestS3GuardTtl.testListingFilteredExpiredItems:335->getDirListingMetadata:367 
[Metastrore directory listing of 
s3a://hwdev-steve-ireland-new/fork-0001/test/testListingFilteredExpiredItems] 
Expecting actual not to be null
[ERROR] Errors: 
[ERROR] 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.testMultiAuthPath(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)
[ERROR]   Run 1: ITestAuthoritativePath.setup:63 » AssumptionViolated FS needs 
to have a metada...
[ERROR]   Run 2: ITestAuthoritativePath.teardown:98->cleanUpFS:87 NullPointer
[INFO] 
[ERROR] 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.testPrefixVsDirectory(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)
[ERROR]   Run 1: ITestAuthoritativePath.setup:63 » AssumptionViolated FS needs 
to have a metada...
[ERROR]   Run 2: ITestAuthoritativePath.teardown:98->cleanUpFS:87 NullPointer
[INFO] 
[ERROR] 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.testSingleAuthPath(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)
[ERROR]   Run 1: ITestAuthoritativePath.setup:63 » AssumptionViolated FS needs 
to have a metada...
[ERROR]   Run 2: ITestAuthoritativePath.teardown:98->cleanUpFS:87 NullPointer
[INFO] 

 {code}

> S3A ITest failures without S3Guard
> --
>
> Key: HADOOP-16634
> URL: https://issues.apache.org/jira/browse/HADOOP-16634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> This has probably been lurking for a while but we hadn't noticed because if 
> your auth-keys xml settings mark a specific store as guarded, then the maven 
> CLI settings aren't picked up. Remove those bindings and things fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16634) S3A ITest failures without S3Guard

2019-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16634:
---

 Summary: S3A ITest failures without S3Guard
 Key: HADOOP-16634
 URL: https://issues.apache.org/jira/browse/HADOOP-16634
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


This has probably been lurking for a while but we hadn't noticed because if 
your auth-keys xml settings mark a specific store as guarded, then the maven 
CLI settings aren't picked up. Remove those bindings and things fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16591) S3A ITest*MRjob failures

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16591:

Parent: HADOOP-15620
Issue Type: Sub-task  (was: Test)

> S3A ITest*MRjob failures
> 
>
> Key: HADOOP-16591
> URL: https://issues.apache.org/jira/browse/HADOOP-16591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Major
> Fix For: 3.3.0
>
>
> ITest*MRJob fail with a FileNotFoundException
> {code}
> [ERROR]   
> ITestMagicCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
>  » FileNotFound
> [ERROR]   
> ITestDirectoryCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
>  » FileNotFound
> [ERROR]   
> ITestPartitionCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
>  » FileNotFound
> [ERROR]   
> ITestStagingCommitMRJob>AbstractITCommitMRJob.testMRJob:146->AbstractFSContractTestBase.assertIsDirectory:327
>  » FileNotFound
> {code}
> Details here: 
> https://issues.apache.org/jira/browse/HADOOP-16207?focusedCommentId=16933718=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16933718
> Creating a separate jira since HADOOP-16207 already has a patch which is 
> trying to parallelize the test runs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16632) Speculating & Partitioned S3A magic committers can leave pending files under __magic

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16632:

Summary: Speculating & Partitioned S3A magic committers can leave pending 
files under __magic  (was: Partitioned S3A magic committers can leave pending 
files under __magic, maybe uploads)

> Speculating & Partitioned S3A magic committers can leave pending files under 
> __magic
> 
>
> Key: HADOOP-16632
> URL: https://issues.apache.org/jira/browse/HADOOP-16632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Partitioned S3A magic committers can leave pending files, maybe upload data
> This surfaced in an assertion failure on a parallel test run.
> I thought it was actually a test failure, but with HADOOP-16207 all the docs 
> are preserved in the local FS and I can understand what happened.
> h3. Junit process
> {code}
> [INFO] 
> [ERROR] Failures: 
> [ERROR] 
> ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356
>  Expected a java.io.FileNotFoundException to be thrown, but got the result: : 
> "Found magic dir which should have been deleted at 
> S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic;
>  isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
> group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
> isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
> versionId=null
> [s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
> s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending
> {code}
> Full details to follow in the comment as they are, well, detailed.
>  
> Key point: AM-side job and task cleanup can happen before the worker task 
> finishes its writes. This will result in files under __magic. It may result 
> in pending uploads too -but only if the write began after the AM job cleanup 
> did a list + abort of all pending uploads under the destination directory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16633) Tune hadoop-aws parallel test surefire/failsafe settings

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16633:

Description: 
We can do more to improve our test runs by looking at the failsafe docs

[http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]

 

* default value to be by core, eg. 2C

* use the runorder attribute which determines how parallel runs are chosen; 
random for better nondeterminism

 We'd need to experiment first, which can be done by setting failsafe.runOrder

  was:
Maven failsafe and surefire parallel runners support a runorder attribute which 
determines how parallel runs are chosen

[http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]


> Tune hadoop-aws parallel test surefire/failsafe settings
> 
>
> Key: HADOOP-16633
> URL: https://issues.apache.org/jira/browse/HADOOP-16633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3, test
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> We can do more to improve our test runs by looking at the failsafe docs
> [http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]
>  
> * default value to be by core, eg. 2C
> * use the runorder attribute which determines how parallel runs are chosen; 
> random for better nondeterminism
>  We'd need to experiment first, which can be done by setting failsafe.runOrder



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16570) S3A committers leak threads/raises OOM on job/task commit at scale

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16570.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

Done. I do have a backlog of things to go into 3.2 and 3.1; this should be on it. 
Most people won't hit the job commit problem, but thread pool leaks happen 
inevitably.
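
The shape of the fix, as a minimal sketch; the class and method names here are 
illustrative, not the actual AbstractS3ACommitter code:

{code:java}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative only: a committer must tear its pool down when done. */
public class CommitterPoolSketch {
  // sized from fs.s3a.committer.threads in the real committer
  private ExecutorService threadPool = Executors.newFixedThreadPool(8);

  /** Shut the pool down so repeated committers don't each leak threads. */
  private synchronized void destroyThreadPool() {
    if (threadPool != null) {
      threadPool.shutdown(); // stop accepting work; let queued tasks drain
      threadPool = null;
    }
  }

  public void commitJob() throws IOException {
    try {
      // ... submit the per-file commit operations to threadPool ...
    } finally {
      destroyThreadPool(); // runs on success and on failure alike
    }
  }
}
{code}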

> S3A committers leak threads/raises OOM on job/task commit at scale
> --
>
> Key: HADOOP-16570
> URL: https://issues.apache.org/jira/browse/HADOOP-16570
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> The fixed size ThreadPool created in AbstractS3ACommitter doesn't get cleaned 
> up at EOL; as a result you leak the no. of threads set in 
> "fs.s3a.committer.threads"
> Not visible in MR/distcp jobs, but ultimately causes OOM on Spark



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16633) Tune hadoop-aws parallel test surefire/failsafe settings

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16633:

Summary: Tune hadoop-aws parallel test surefire/failsafe settings  (was: 
Tunehadoop-aws parallel tests to use balanced runorder for parallel tests)

> Tune hadoop-aws parallel test surefire/failsafe settings
> 
>
> Key: HADOOP-16633
> URL: https://issues.apache.org/jira/browse/HADOOP-16633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3, test
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Major
>
> Maven failsafe and surefire parallel runners support a runorder attribute 
> which determines how parallel runs are chosen
> [http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16633) Tunehadoop-aws parallel tests to use balanced runorder for parallel tests

2019-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16633:
---

 Summary: Tunehadoop-aws parallel tests to use balanced runorder 
for parallel tests
 Key: HADOOP-16633
 URL: https://issues.apache.org/jira/browse/HADOOP-16633
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3, test
Affects Versions: 3.2.1
Reporter: Steve Loughran


Maven failsafe and surefire parallel runners support a runorder attribute which 
determines how parallel runs are chosen

[http://maven.apache.org/surefire/maven-failsafe-plugin/integration-test-mojo.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16632) Partitioned S3A magic committers can leave pending files under __magic, maybe uploads

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944698#comment-16944698
 ] 

Steve Loughran commented on HADOOP-16632:
-

(FWIW this is surfacing because with the move of the committers back to the 
parallel phase, 12 threads + scale overloads my laptop. I need to either get 
hold of a better box or use smaller thread counts)

> Partitioned S3A magic committers can leave pending files under __magic, maybe 
> uploads
> -
>
> Key: HADOOP-16632
> URL: https://issues.apache.org/jira/browse/HADOOP-16632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Partitioned S3A magic committers can leave pending files, maybe upload data
> This surfaced in an assertion failure on a parallel test run.
> I thought it was actually a test failure, but with HADOOP-16207 all the docs 
> are preserved in the local FS and I can understand what happened.
> h3. Junit process
> {code}
> [INFO] 
> [ERROR] Failures: 
> [ERROR] 
> ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356
>  Expected a java.io.FileNotFoundException to be thrown, but got the result: : 
> "Found magic dir which should have been deleted at 
> S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic;
>  isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
> group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
> isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
> versionId=null
> [s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
> s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending
> {code}
> Full details to follow in the comment as they are, well, detailed.
>  
> Key point: AM-side job and task cleanup can happen before the worker task 
> finishes its writes. This will result in files under __magic. It may result 
> in pending uploads too -but only if the write began after the AM job cleanup 
> did a list + abort of all pending uploads under the destination directory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16632) Partitioned S3A magic committers can leave pending files under __magic, maybe uploads

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944624#comment-16944624
 ] 

Steve Loughran commented on HADOOP-16632:
-

This test is failing because the successful job should have deleted the __magic 
directory, but the assertion finds the output of an attempt, which is 
potentially the sign of something going very wrong.

As it is, all is good except for the cleanup, which is beyond our control.

For some reason this task was speculatively executed, and the second attempt 
`_0003_m_08_1` was not committed before the job completed and the AM 
terminated. When the worker requested permission to commit, it couldn't connect 
over the (broken) RPC channel, retried a bit and gave up.
h3. task attempt_1570197469968_0003_m_08_1
{code:java}
2019-10-04 15:02:34,237 INFO [main] 
org.apache.hadoop.fs.s3a.commit.magic.MagicCommitTracker: File 
s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
 is written as magic file to path 
fork-0001/test/ITestS3ACommitterMRJob-execute-magic/part-m-8
2019-10-04 15:02:35,717 INFO [main] 
org.apache.hadoop.fs.s3a.commit.magic.MagicCommitTracker: Uncommitted data 
pending to file 
s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8;
 commit metadata for 1 parts in 
fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending.
 sixe: 4690 byte(s)
2019-10-04 15:02:36,877 INFO [main] 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream: File 
fork-0001/test/ITestS3ACommitterMRJob-execute-magic/part-m-8 will be 
visible when the job is committed
2019-10-04 15:02:36,888 INFO [main] org.apache.hadoop.mapred.Task: 
Task:attempt_1570197469968_0003_m_08_1 is done. And is in the process of 
committing
2019-10-04 15:02:36,889 INFO [main] 
org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter: Starting: 
needsTaskCommit task attempt_1570197469968_0003_m_08_1
2019-10-04 15:02:39,118 INFO [main] 
org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter: needsTaskCommit 
task attempt_1570197469968_0003_m_08_1: duration 0:02.230s
2019-10-04 15:02:39,124 WARN [main] org.apache.hadoop.mapred.Task: Failure 
sending commit pending: java.io.EOFException: End of File Exception between 
local host is: "HW13176-2.local/192.168.1.6"; destination host is: 
"localhost":57166; : java.io.EOFException; For more details see:  
http://wiki.apache.org/hadoop/EOFException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:837)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:791)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1557)
at org.apache.hadoop.ipc.Client.call(Client.java:1499)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:251)
at com.sun.proxy.$Proxy8.commitPending(Unknown Source)
at org.apache.hadoop.mapred.Task.done(Task.java:1253)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:351)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1872)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1182)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1078)
{code}
h3. task attempt_1570197469968_0003_m_08_1

And it looks like MR task failure is a no-warning System.exit(-1) call, so the 
workers do not get a chance to do any cleanup themselves.
{code:java}
2019-10-04 15:03:39,315 WARN [communication thread] 
org.apache.hadoop.mapred.Task: Last retry, killing 
attempt_1570197469968_0003_m_08_1
{code}
Another attempt at the same task had already been committed. Thus the output 
of this attempt was not needed: all is good.

[jira] [Updated] (HADOOP-16632) Partitioned S3A magic committers can leave pending files under __magic, maybe uploads

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16632:

Summary: Partitioned S3A magic committers can leave pending files under 
__magic, maybe uploads  (was: Partitioned S3A magic committers can leaving 
pending files, maybe upload data)

> Partitioned S3A magic committers can leave pending files under __magic, maybe 
> uploads
> -
>
> Key: HADOOP-16632
> URL: https://issues.apache.org/jira/browse/HADOOP-16632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Partitioned S3A magic committers can leave pending files, maybe upload data
> This surfaced in an assertion failure on a parallel test run.
> I thought it was actually a test failure, but with HADOOP-16207 all the docs 
> are preserved in the local FS and I can understand what happened.
> h3. Junit process
> {code}
> [INFO] 
> [ERROR] Failures: 
> [ERROR] 
> ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356
>  Expected a java.io.FileNotFoundException to be thrown, but got the result: : 
> "Found magic dir which should have been deleted at 
> S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic;
>  isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
> group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
> isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
> versionId=null
> [s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
> s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending
> {code}
> Full details to follow in the comment as they are, well, detailed.
>  
> Key point: AM-side job and task cleanup can happen before the worker task 
> finishes its writes. This will result in files under __magic. It may result 
> in pending uploads too -but only if the write began after the AM job cleanup 
> did a list + abort of all pending uploads under the destination directory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16632) Partitioned S3A magic committers can leaving pending files, maybe upload data

2019-10-04 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16632:
---

 Summary: Partitioned S3A magic committers can leaving pending 
files, maybe upload data
 Key: HADOOP-16632
 URL: https://issues.apache.org/jira/browse/HADOOP-16632
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.3, 3.2.1
Reporter: Steve Loughran
Assignee: Steve Loughran


Partitioned S3A magic committers can leave pending files, maybe upload data

This surfaced in an assertion failure on a parallel test run.

I thought it was actually a test failure, but with HADOOP-16207 all the docs 
are preserved in the local FS and I can understand what happened.

h3. Junit process
{code}
[INFO] 
[ERROR] Failures: 
[ERROR] 
ITestS3ACommitterMRJob.test_200_execute:344->customPostExecutionValidation:356 
Expected a java.io.FileNotFoundException to be thrown, but got the result: : 
"Found magic dir which should have been deleted at 
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic;
 isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null
[s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8
s3a://hwdev-steve-ireland-new/fork-0001/test/ITestS3ACommitterMRJob-execute-magic/__magic/app-attempt-0001/tasks/attempt_1570197469968_0003_m_08_1/__base/part-m-8.pending
{code}

Full details to follow in the comment as they are, well, detailed.

 

Key point: AM-side job and task cleanup can happen before the worker task 
finishes its writes. This will result in files under __magic. It may result in 
pending uploads too -but only if the write began after the AM job cleanup did a 
list + abort of all pending uploads under the destination directory



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16207) Improved S3A MR tests

2019-10-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944488#comment-16944488
 ] 

Steve Loughran commented on HADOOP-16207:
-

The final patch replaces the committer-specific terasort and MR test jobs with 
parameterization of the (now single) tests, and uses file:// rather than 
hdfs:// as the cluster FS.

The parameterization ensures that only one of the committer-specific tests runs 
at a time: overloads of the test machines are less likely, and so the suites 
can be pulled back into the parallel phase.
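
The pattern, as a minimal sketch; the class name and job wiring are 
illustrative, with only the four committer names taken from the suites being 
replaced:

{code:java}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

/** Sketch: one suite, one run per committer, executed one at a time. */
@RunWith(Parameterized.class)
public class CommitterMRJobSketch {

  @Parameterized.Parameters(name = "committer-{0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][]{
        {"directory"}, {"partitioned"}, {"magic"}, {"staging"}});
  }

  private final String committerName;

  public CommitterMRJobSketch(String committerName) {
    this.committerName = committerName;
  }

  @Test
  public void test_200_execute() {
    // ... run the MR job with fs.s3a.committer.name = committerName ...
  }
}
{code}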

There's also more detailed validation of the stage outputs of the terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.

 

We also have the S3Guard operations log enabled; on guarded runs this tracks 
all PUT/DELETE/TOMBSTONE calls made of a store, so it acts as the log of what 
changes were made there. If we see intermittent issues here again (and after 
the HADOOP-16570 changes), then we are better positioned to understand the 
failures.

> Improved S3A MR tests
> -
>
> Key: HADOOP-16207
> URL: https://issues.apache.org/jira/browse/HADOOP-16207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Reported failure of {{ITestDirectoryCommitMRJob}} in validation runs of 
> HADOOP-16186; assertIsDirectory with s3guard enabled and a parallel test run: 
> Path "is recorded as deleted by S3Guard"
> {code}
> waitForConsistency();
> assertIsDirectory(outputPath) /* here */
> {code}
> The file is there but there's a tombstone. Possibilities
> * some race condition with another test
> * tombstones aren't timing out
> * committers aren't creating that base dir in a way which cleans up S3Guard's 
> tombstones. 
> Remember: we do have to delete that dest dir before the committer runs unless 
> overwrite==true, so at the start of the run there will be a tombstone. It 
> should be overwritten by a success.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16207) Improved S3A MR tests

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16207.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> Improved S3A MR tests
> -
>
> Key: HADOOP-16207
> URL: https://issues.apache.org/jira/browse/HADOOP-16207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Fix For: 3.3.0
>
>
> Reported failure of {{ITestDirectoryCommitMRJob}} in validation runs of 
> HADOOP-16186; assertIsDirectory with s3guard enabled and a parallel test run: 
> Path "is recorded as deleted by S3Guard"
> {code}
> waitForConsistency();
> assertIsDirectory(outputPath) /* here */
> {code}
> The file is there but there's a tombstone. Possibilities
> * some race condition with another test
> * tombstones aren't timing out
> * committers aren't creating that base dir in a way which cleans up S3Guard's 
> tombstones. 
> Remember: we do have to delete that dest dir before the committer runs unless 
> overwrite==true, so at the start of the run there will be a tombstone. It 
> should be overwritten by a success.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16207) Improved S3A MR tests

2019-10-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16207:

Summary: Improved S3A MR tests  (was: Fix 
ITestDirectoryCommitMRJob.testMRJob)

> Improved S3A MR tests
> -
>
> Key: HADOOP-16207
> URL: https://issues.apache.org/jira/browse/HADOOP-16207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Reported failure of {{ITestDirectoryCommitMRJob}} in validation runs of 
> HADOOP-16186; assertIsDirectory with s3guard enabled and a parallel test run: 
> Path "is recorded as deleted by S3Guard"
> {code}
> waitForConsistency();
> assertIsDirectory(outputPath) /* here */
> {code}
> The file is there but there's a tombstone. Possibilities
> * some race condition with another test
> * tombstones aren't timing out
> * committers aren't creating that base dir in a way which cleans up S3Guard's 
> tombstones. 
> Remember: we do have to delete that dest dir before the committer runs unless 
> overwrite==true, so at the start of the run there will be a tombstone. It 
> should be overwritten by a success.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943880#comment-16943880
 ] 

Steve Loughran commented on HADOOP-16626:
-

OK. I have now learned something.

When you call Configuration.addResource() it reloads all configs, so
all settings you've previously cleared get set again.

And we force in the contract/s3a.xml settings, don't we?
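
A minimal sketch of the trap; some.key and the extra resource name are 
illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AddResourceReloadSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.addResource("contract/s3a.xml"); // suppose this defines some.key
    conf.unset("some.key");               // cleared... for now

    // addResource() marks the whole Configuration for reload, so every
    // resource, including contract/s3a.xml, is re-read on the next lookup:
    conf.addResource("another-resource.xml");
    conf.get("some.key"); // back to the value from contract/s3a.xml
  }
}
{code}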

I'm going to change how we load that file (which declares the expected FS 
behaviour in the common tests). I'm going to make that load optional and only 
load it in those s3a contract tests, not the other S3A tests.


(pause)
Actually, that's not enough! The first call to FileSystem.get() will force 
service discovery of all filesystems, which will force their class 
instantiation, and then any class which forces in a config (HDFS) triggers this.

{code}
Breakpoint reached
  at 
org.apache.hadoop.conf.Configuration.addDefaultResource(Configuration.java:893)
  at 
org.apache.hadoop.mapreduce.util.ConfigUtil.loadResources(ConfigUtil.java:43)
  at org.apache.hadoop.mapred.JobConf.(JobConf.java:123)
  at java.lang.Class.forName0(Class.java:-1)
  at java.lang.Class.forName(Class.java:348)
  at 
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2603)
  at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:96)
  at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:79)
  at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
  at org.apache.hadoop.security.Groups.(Groups.java:106)
  at org.apache.hadoop.security.Groups.(Groups.java:102)
  at 
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:451)
  at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:355)
  at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317)
  at 
org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1989)
  at 
org.apache.hadoop.security.UserGroupInformation.createLoginUser(UserGroupInformation.java:746)
  at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:696)
  at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607)
  at 
org.apache.hadoop.fs.viewfs.ViewFileSystem.(ViewFileSystem.java:230)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance0(NativeConstructorAccessorImpl.java:-1)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at java.lang.Class.newInstance(Class.java:442)
  at 
java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
  at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
  at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
  at 
org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:3310)
  at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3355)
  at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3394)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:500)
  at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
  at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:178)
  at 
org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:55)
  at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.setup(ITestRestrictedReadAccess.java:233)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-1)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
  at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 

[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943699#comment-16943699
 ] 

Steve Loughran commented on HADOOP-16626:
-

Looking at this.

The issue is not just that the test fails for Sid;
it is that it works for me. Why? The code which tries to disable S3Guard
isn't picked up: the per-bucket settings are overriding what we've chosen.

This is unfortunate, because we're trying to unset those in 
removeBaseAndBucketOverrides(). I'm going to look at this in more detail.
Only once I fix the test setup to replicate the problem will I look at fixing 
it, which is simply one of "lists will fail without read access on raw".
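
A minimal sketch of the override problem; the bucket name is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PerBucketOverrideSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // the test tries to force raw, unguarded S3:
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore");
    // ...but a per-bucket override such as
    //   fs.s3a.bucket.mybucket.metadatastore.impl = ...DynamoDBMetadataStore
    // is copied over the base key when S3AFileSystem initializes, so the
    // bucket stays guarded unless the per-bucket key is removed as well.
  }
}
{code}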

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended 

[jira] [Work started] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16626 started by Steve Loughran.
---
> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
> at 
> 

[jira] [Commented] (HADOOP-16573) IAM role created by S3A DT doesn't include DynamoDB scan

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943569#comment-16943569
 ] 

Steve Loughran commented on HADOOP-16573:
-

Or just ask for tag and scan permissions: tag, so that dynamic setting of the 
tag from the version works. Not very important, as we should have tagged the 
table on the client already.

> IAM role created by S3A DT doesn't include DynamoDB scan
> 
>
> Key: HADOOP-16573
> URL: https://issues.apache.org/jira/browse/HADOOP-16573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> You can't run {{s3guard prune}} with role DTs as we don't create it with 
> permissons to do so.
> I think it may actually be useful to have an option where we don't restrict 
> the role. This doesn't just help with debugging, it would let things like SQS 
> integration pick up the creds from S3A.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943488#comment-16943488
 ] 

Steve Loughran commented on HADOOP-16626:
-

caused by HADOOP-16458 

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
> at 
> 

[jira] [Updated] (HADOOP-16605) NPE in TestAdlSdkConfiguration failing in yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16605:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the patch in. Thanks!

> NPE in TestAdlSdkConfiguration failing in yetus
> ---
>
> Key: HADOOP-16605
> URL: https://issues.apache.org/jira/browse/HADOOP-16605
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>
> Yetus builds are failing with an NPE in TestAdlSdkConfiguration if they go near 
> hadoop-azure-datalake. Assuming HADOOP-16438 until proven otherwise, though 
> HADOOP-16371 may have done something too (how?); whatever it was wasn't 
> picked up, as Yetus didn't know that hadoop-azure-datalake was affected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943481#comment-16943481
 ] 

Steve Loughran commented on HADOOP-16625:
-

LGTM.

> Backport HADOOP-14624 to branch-3.1
> ---
>
> Key: HADOOP-16625
> URL: https://issues.apache.org/jira/browse/HADOOP-16625
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16625.branch-3.1.001.patch
>
>
> I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of 
> them do not compile because of the commons-logging to slf4j migration. 
> One of the issues is that GenericTestUtils.DelayAnswer does not accept the 
> slf4j logger API.
> Backport HADOOP-14624 to branch-3.1 to make backporting easier. It updates the 
> DelayAnswer signature, but it's in the test scope, so we're not really 
> breaking backward compatibility.
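
For context, a hedged sketch of what the signature change being backported amounts to; this is a hypothetical, abbreviated class, not the real helper in GenericTestUtils:

{code}
import org.slf4j.Logger;

// Sketch of the HADOOP-14624-style change: a DelayAnswer-like test helper
// taking an slf4j Logger instead of org.apache.commons.logging.Log, which
// is why trunk patches stop compiling on branch-3.1 without the backport.
class DelayAnswerSketch {
  private final Logger log;   // was: org.apache.commons.logging.Log

  DelayAnswerSketch(Logger log) {
    this.log = log;
  }

  void proceed() {
    log.info("allowing delayed call to proceed");  // slf4j logging API
  }
}
{code}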



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
15s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 53s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 30fa865f3df9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / ae8839e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Issue Comment Deleted] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
56s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
51s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 53s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfb5e424c863 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / ae8839e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Issue Comment Deleted] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
34s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 46s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
24s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 928f2a3c344c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Issue Comment Deleted] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
24s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 199703f73a5a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / 8c70728 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Issue Comment Deleted] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Comment: was deleted

(was: | (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d521c0a1f77 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943464#comment-16943464
 ] 

Steve Loughran commented on HADOOP-16626:
-

The test filesystem created here has list access but not HEAD/GET access.

Looking at the stack, I don't see how the raw check could work at all here,
because we call getFileStatus before the LIST. With S3Guard, fine,
provided the entry is in the table. But raw: it should always fail.

So why don't I see that? Am I clearing the bucket settings?
I will look with a debugger.

FWIW, I do hope/plan to actually remove those getFileStatus calls
before list operations which are normally called against directories
(the list* operations, essentially). They should do the list first,
and only if that fails to find anything, fall back to the getFileStatus
probes for a file or marker. This should make a big difference during query
planning, and stop markers being mistaken for empty directories.
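
To make that ordering concrete, a rough sketch, with hypothetical helper names rather than the actual S3AFileSystem internals:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Rough sketch only, not the S3AFileSystem code: LIST the prefix first,
// and fall back to the HEAD-based getFileStatus probes only when the
// listing comes back empty.
abstract class ListFirstSketch {
  FileStatus[] listStatusListFirst(Path path) throws IOException {
    List<FileStatus> listing = listUnderPrefix(path); // hypothetical: one LIST call
    if (!listing.isEmpty()) {
      return listing.toArray(new FileStatus[0]);      // directory with children
    }
    // Nothing listed: probe for a plain file or an empty-directory marker.
    FileStatus status = headProbe(path);              // hypothetical: HEAD request(s)
    return status != null
        ? new FileStatus[] { status }
        : new FileStatus[0];
  }

  abstract List<FileStatus> listUnderPrefix(Path path) throws IOException;

  abstract FileStatus headProbe(Path path) throws IOException;
}
{code}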

This means whatever changes I make to fix this regression will have to be rolled 
back later. Never mind.

Thanks for finding this. 

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>

[jira] [Updated] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16626:

Parent: HADOOP-15620
Issue Type: Sub-task  (was: Test)

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
> at 
> 

[jira] [Assigned] (HADOOP-16626) S3A ITestRestrictedReadAccess fails

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16626:
---

Assignee: Steve Loughran

> S3A ITestRestrictedReadAccess fails
> ---
>
> Key: HADOOP-16626
> URL: https://issues.apache.org/jira/browse/HADOOP-16626
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Reporter: Siddharth Seth
>Assignee: Steve Loughran
>Priority: Major
>
> Just tried running the S3A test suite. Consistently seeing the following.
> Command used 
> {code}
> mvn -T 1C  verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth 
> -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess
> {code}
> cc [~ste...@apache.org]
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> ---
> Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
> testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)
>   Time elapsed: 2.841 s  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on 
> test/testNoReadAccess-raw/noReadDir/emptyDir/: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403
>  Forbidden
> at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360)
> at 
> org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden 
> (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> FE8B4D6F25648BCD; S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=),
>  S3 Extended Request ID: 
> hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
> at 
> 

[jira] [Updated] (HADOOP-15729) [s3a] stop treating fs.s3a.max.threads as the long-term minimum

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15729:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Closing as fixed.

Sean, how far back should this patch go?

> [s3a] stop treating fs.s3a.max.threads as the long-term minimum
> 
>
> Key: HADOOP-15729
> URL: https://issues.apache.org/jira/browse/HADOOP-15729
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15729.001.patch, HADOOP-15729.002.patch
>
>
> A while ago the s3a connector started experiencing deadlocks because the AWS 
> SDK requires an unbounded threadpool. It places monitoring tasks on the work 
> queue before the tasks they wait on, so it's possible (it has even happened 
> with larger-than-default threadpools) for the executor to become permanently 
> saturated and deadlock.
> So we started giving an unbounded threadpool executor to the SDK, and using a 
> bounded, blocking threadpool service for everything else S3A needs (although 
> currently that's only in the S3ABlockOutputStream). fs.s3a.max.threads then 
> only limits this threadpool; however, we also specified fs.s3a.max.threads as 
> the number of core threads in the unbounded threadpool, which in hindsight is 
> pretty terrible.
> Currently those core threads do not time out, so this is actually setting a 
> sort of minimum. Once that many tasks have been submitted, the threadpool 
> will be locked at that number until it bursts beyond it, but it will only 
> spin down that far. If fs.s3a.max.threads is set reasonably high and someone 
> uses a bunch of S3 buckets, they could easily have thousands of idle threads 
> constantly.
> We should either stop using fs.s3a.max.threads for the core pool size and 
> introduce a new configuration, or simply allow core threads to time out. I'm 
> reading the OpenJDK source now to see what subtle differences there are 
> between core threads and other threads when core threads can time out.
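
For reference, a minimal sketch of the second option above (letting core threads time out) on a plain java.util.concurrent.ThreadPoolExecutor; the sizes here are illustrative, not S3A defaults:

{code}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutSketch {
  public static void main(String[] args) {
    // Unbounded-style pool: new threads are created on demand, as the SDK needs.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        64,                     // core size: what fs.s3a.max.threads ends up setting
        Integer.MAX_VALUE,      // effectively unbounded maximum
        60L, TimeUnit.SECONDS,  // idle threads exit after 60 seconds...
        new SynchronousQueue<>());
    // ...including the core threads, so the pool can spin down to zero when
    // idle instead of pinning that many idle threads per bucket indefinitely.
    pool.allowCoreThreadTimeOut(true);
    pool.shutdown();
  }
}
{code}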



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
52s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  1m 
17s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1481/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1481 |
| JIRA Issue | HADOOP-16587 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4849283257e3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 2e1fd44 |
| Default Java 

[jira] [Issue Comment Deleted] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HADOOP-16587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981827/HADOOP-16587.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 20af344010e3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4d3c580 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16561/testReport/ |
| Max. process+thread count | 318 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16561/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> Make AAD endpoint configurable on all Auth flows
> 

[jira] [Issue Comment Deleted] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
58s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  1m 
20s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1481/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1481 |
| JIRA Issue | HADOOP-16587 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux d223bbc06f31 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4d3c580 |
| Default Java 

[jira] [Issue Comment Deleted] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HADOOP-16587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12980978/HADOOP-16587.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c9d605832397 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a94aa1f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16539/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16539/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 

[jira] [Issue Comment Deleted] (HADOOP-16587) Make AAD endpoint configurable on all Auth flows

2019-10-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16587:

Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-16587 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981816/HADOOP-16587.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16560/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> Make AAD endpoint configurable on all Auth flows
> -
>
> Key: HADOOP-16587
> URL: https://issues.apache.org/jira/browse/HADOOP-16587
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Attachments: HADOOP-16587.001.patch, HADOOP-16587.002.patch, 
> HADOOP-16587.003.patch, HADOOP-16587.004.patch
>
>
> Make AAD endpoint configurable on all Auth flows. Currently the auth endpoint is
> hard-coded for the refresh token flow and the MSI flow.
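
For context, a minimal sketch of what the configurable endpoints could look like in core-site.xml. The property names below (fs.azure.account.oauth2.msi.endpoint, fs.azure.account.oauth2.refresh.token.endpoint) follow the naming pattern of the attached patches but should be treated as assumptions until the final patch is committed; the values shown are the endpoints believed to be hard-coded today in AzureADAuthenticator.

{code:xml}
<!-- Sketch only: property names are assumed from this patch series and may
     differ in the committed version. -->
<property>
  <!-- AAD endpoint used by the MSI (managed identity) flow; the Azure IMDS
       token endpoint is the currently hard-coded value. -->
  <name>fs.azure.account.oauth2.msi.endpoint</name>
  <value>http://169.254.169.254/metadata/identity/oauth2/token</value>
</property>
<property>
  <!-- AAD endpoint used by the refresh token flow; the global AAD "Common"
       tenant endpoint is the currently hard-coded value. -->
  <name>fs.azure.account.oauth2.refresh.token.endpoint</name>
  <value>https://login.microsoftonline.com/Common/oauth2/token</value>
</property>
{code}

Making these endpoints configurable matters chiefly for sovereign and national clouds (for example Azure US Government or Azure China), where the AAD endpoints differ from the global ones.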



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


