[GitHub] [hadoop] liuml07 commented on a change in pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread GitBox
liuml07 commented on a change in pull request #1840: HADOOP-16853. 
ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
URL: https://github.com/apache/hadoop/pull/1840#discussion_r378067087
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardOutOfBandOperations.java
 ##
 @@ -969,16 +970,42 @@ private void deleteFileInListing()
     deleteFile(rawFS, testFilePath);
 
     // File status will be still readable from s3guard
-    FileStatus status = guardedFs.getFileStatus(testFilePath);
+    S3AFileStatus status = (S3AFileStatus)
+        guardedFs.getFileStatus(testFilePath);
     LOG.info("authoritative: {} status: {}", allowAuthoritative, status);
-    expectExceptionWhenReading(testFilePath, text);
-    expectExceptionWhenReadingOpenFileAPI(testFilePath, text, null);
-    expectExceptionWhenReadingOpenFileAPI(testFilePath, text, status);
+    if (isVersionedChangeDetection() && status.getVersionId() != null) {
+      // when the status entry has a version ID, then that may be used
+      // when opening the file on what is clearly a versioned store.
+      int length = text.length();
+      byte[] bytes = readOpenFileAPI(guardedFs, testFilePath, length, null);
+      Assertions.assertThat(toChar(bytes))
+          .describedAs("openFile(%s)", testFilePath)
+          .isEqualTo(text);
+      // reading the rawFS with status will also work.
+      bytes = readOpenFileAPI(rawFS, testFilePath, length, status);
 
 Review comment:
   `bytes = readOpenFileAPI(rawFS, testFilePath, length, null);` should still 
fail, right? Do you think we can also check that in this `if` clause?
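   A minimal sketch of that extra assertion, assuming the test's existing `readOpenFileAPI` helper and `org.apache.hadoop.test.LambdaTestUtils.intercept`; the expected exception type here is an assumption:
   ```java
   // Hedged sketch: on a versioned store, a raw read without the
   // version-carrying status should still surface the deletion.
   intercept(FileNotFoundException.class, () ->
       readOpenFileAPI(rawFS, testFilePath, text.length(), null));
   ```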


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16840) AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket

2020-02-11 Thread wujinhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035078#comment-17035078
 ] 

wujinhu commented on HADOOP-16840:
--

[~weiweiyagn666] Please help review this patch, thanks.:)

> AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket
> --
>
> Key: HADOOP-16840
> URL: https://issues.apache.org/jira/browse/HADOOP-16840
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.2.1, 3.1.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-16840.001.patch, HADOOP-16840.002.patch
>
>
> When Hadoop lists objects in a versioning bucket with many delete markers in 
> it, OSS will return
> {code:xml}
> <ListBucketResult>
>   <Name>select-us-east-1</Name>
>   <Prefix>test/hadoop/file/</Prefix>
>   <Marker></Marker>
>   <MaxKeys>100</MaxKeys>
>   <Delimiter>/</Delimiter>
>   <IsTruncated>true</IsTruncated>
>   <NextMarker>test/hadoop/file/sub2</NextMarker>
> </ListBucketResult>
> {code}
> It sets *IsTruncated* to true but returns neither *ObjectSummaries* nor 
> *CommonPrefixes*, so getFileStatus will throw FileNotFoundException
> {code:java}
> java.io.FileNotFoundException: oss://select-us-east-1/test/hadoop/file: No 
> such file or directory!
>  at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.getFileStatus(AliyunOSSFileSystem.java:281)
>  at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.testGetFileStatusInVersioningBucket{code}
>  
> {code:java}
> ObjectListing listing = store.listObjects(key, 1, null, false);
> if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
> CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
>   return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
> } else {
>   throw new FileNotFoundException(path + ": No such file or directory!");
> }
> {code}
>  In this case, we should call listObjects until *IsTruncated* is false, or 
> *ObjectSummaries* is not empty, or *CommonPrefixes* is not empty, as sketched 
> below.
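
A minimal sketch of that loop (not the attached patch), assuming the `store.listObjects(prefix, maxKeys, marker, recursive)` overload shown above and the SDK's `ObjectListing#isTruncated()`/`getNextMarker()` accessors:
{code:java}
// Keep listing while the response is truncated but carries no entries.
ObjectListing listing = store.listObjects(key, 1, null, false);
while (listing.isTruncated()
    && CollectionUtils.isEmpty(listing.getObjectSummaries())
    && CollectionUtils.isEmpty(listing.getCommonPrefixes())) {
  listing = store.listObjects(key, 1, listing.getNextMarker(), false);
}
if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) ||
    CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) {
  return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username);
} else {
  throw new FileNotFoundException(path + ": No such file or directory!");
}
{code}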



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16857) ABFS: Optimize HttpRequest retry triggers

2020-02-11 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16857:
--

 Summary: ABFS: Optimize HttpRequest retry triggers
 Key: HADOOP-16857
 URL: https://issues.apache.org/jira/browse/HADOOP-16857
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.1
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


Currently, the retry logic gets triggered when an access token fetch fails, 
even for irrecoverable errors, causing a long wait before the request failure 
is reported. 

 

Retry logic needs to be optimized to identify such access token fetch failures 
and fail fast.
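
A hedged illustration of the fail-fast idea (generic Java, not ABFS code; all names here are hypothetical):
{code:java}
// Distinguish irrecoverable token-fetch failures (bad credentials or
// config, typically HTTP 4xx) from transient ones (5xx, timeouts).
try {
  token = fetchAccessToken();
} catch (HttpResponseException e) {
  int status = e.getStatusCode();
  if (status >= 400 && status < 500) {
    throw e;                     // irrecoverable: surface immediately, no retry
  }
  scheduleRetryWithBackoff();    // transient: retry as before
}
{code}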



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt

2020-02-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16856:
---
Description: 
The following command does not install cmake by default:
{noformat}
$ sudo dnf group install 'Development Tools'{noformat}

cmake is an optional package and {{--with-optional}} should be specified.

  was:
The following command does not install cmake by default:
{noformat}
$ sudo dnf group install 'Development Tools'{noformat}


> cmake is missing in the CentOS 8 section of BUILDING.txt
> 
>
> Key: HADOOP-16856
> URL: https://issues.apache.org/jira/browse/HADOOP-16856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>
> The following command does not install cmake by default:
> {noformat}
> $ sudo dnf group install 'Development Tools'{noformat}
> cmake is an optional package and {{--with-optional}} should be specified.
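
Presumably the documented command would become the following (an assumption based on the description; the committed BUILDING.txt wording may differ):
{noformat}
$ sudo dnf group install --with-optional 'Development Tools'{noformat}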



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt

2020-02-11 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035069#comment-17035069
 ] 

Akira Ajisaka commented on HADOOP-16856:


Hi [~iwasakims], would you check this?

> cmake is missing in the CentOS 8 section of BUILDING.txt
> 
>
> Key: HADOOP-16856
> URL: https://issues.apache.org/jira/browse/HADOOP-16856
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>
> The following command does not install cmake by default:
> {noformat}
> $ sudo dnf group install 'Development Tools'{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16856) cmake is missing in the CentOS 8 section of BUILDING.txt

2020-02-11 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16856:
--

 Summary: cmake is missing in the CentOS 8 section of BUILDING.txt
 Key: HADOOP-16856
 URL: https://issues.apache.org/jira/browse/HADOOP-16856
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, documentation
Reporter: Akira Ajisaka


The following command does not install cmake by default:
{noformat}
$ sudo dnf group install 'Development Tools'{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16855) ABFS: hadoop-dist fails to add wildfly in class path for hadoop-azure

2020-02-11 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16855:
--

 Summary: ABFS: hadoop-dist fails to add wildfly in class path for 
hadoop-azure
 Key: HADOOP-16855
 URL: https://issues.apache.org/jira/browse/HADOOP-16855
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.1
Reporter: Sneha Vijayarajan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16854) ABFS: Tune the logic calculating max concurrent request count

2020-02-11 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16854:
--

 Summary: ABFS: Tune the logic calculating max concurrent request 
count
 Key: HADOOP-16854
 URL: https://issues.apache.org/jira/browse/HADOOP-16854
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.1
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


Currently, in environments where memory is restricted, the max concurrent 
request count logic can require so many buffers that execution blocks, leading 
to OutOfMemory exceptions. One possible mitigation is sketched below. 
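
A hedged illustration of bounding concurrency by available memory (generic Java; the buffer size and scaling factors are assumptions, not ABFS defaults):
{code:java}
// Cap concurrent requests by the memory available for write buffers,
// not just by CPU count, so restricted environments don't exhaust the heap.
long maxMemory = Runtime.getRuntime().maxMemory();
int bufferSize = 8 * 1024 * 1024;  // hypothetical per-request buffer size
int byCpu = 4 * Runtime.getRuntime().availableProcessors();
int byMemory = (int) Math.max(1, (maxMemory / 2) / bufferSize);
int maxConcurrentRequests = Math.min(byCpu, byMemory);
{code}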



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035063#comment-17035063
 ] 

Hudson commented on HADOOP-16849:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17943 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17943/])
HADOOP-16849. start-build-env.sh behaves incorrectly when username is 
(aajisaka: rev 9709afe67d8ed45c3dfb53e45fe1efdc0814ac6c)
* (edit) start-build-env.sh


> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16849:
---
Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, branch-3.1, and branch-2.10. Thanks 
[~Jihyun Cho] for the contribution!

> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router
URL: https://github.com/apache/hadoop/pull/1832#issuecomment-585029529
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 55s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 37s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  8s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 33s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 112m 42s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |  10m 25s |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 209m 18s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestFileCreation |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1832 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5f5a9163a6a8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b8a78d |
   | Default Java | 1.8.0_242 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/4/testReport/ |
   | Max. process+thread count | 2730 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035030#comment-17035030
 ] 

Hadoop QA commented on HADOOP-16849:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
5s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HADOOP-16849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12993021/userid.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux b899a1b0e097 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9b8a78d |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.3.7 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16760/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16760/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Minor
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>  

[jira] [Assigned] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16849:
--

Assignee: Jihyun Cho

> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Assignee: Jihyun Cho
>Priority: Minor
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034968#comment-17034968
 ] 

Akira Ajisaka commented on HADOOP-16849:


LGTM, +1 pending Jenkins.

Numeric uid is always treated as uid.
Ref: https://github.com/opencontainers/runc/pull/708

> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Priority: Minor
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034968#comment-17034968
 ] 

Akira Ajisaka edited comment on HADOOP-16849 at 2/12/20 1:42 AM:
-

LGTM, +1 pending Jenkins.

Numeric uid is always treated as numeric.
Ref: https://github.com/opencontainers/runc/pull/708


was (Author: ajisakaa):
LGTM, +1 pending Jenkins.

Numeric uid is always treated as uid.
Ref: https://github.com/opencontainers/runc/pull/708

> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Priority: Minor
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16849:
---
Status: Patch Available  (was: Open)

> start-build-env.sh behaves incorrectly when username is numeric only
> 
>
> Key: HADOOP-16849
> URL: https://issues.apache.org/jira/browse/HADOOP-16849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Jihyun Cho
>Priority: Minor
> Attachments: userid.patch
>
>
> When the username is numeric only, the build environment does not run correctly.
> Here is my case.
> {noformat}
> ~/hadoop$ ./start-build-env.sh
> ...
> Successfully tagged hadoop-build-1649860140:latest
>  _   _   ___
> | | | | | |   |  _  \
> | |_| | __ _  __| | ___   ___  _ __   | | | |_   __
> |  _  |/ _` |/ _` |/ _ \ / _ \| '_ \  | | | / _ \ \ / /
> | | | | (_| | (_| | (_) | (_) | |_) | | |/ /  __/\ V /
> \_| |_/\__,_|\__,_|\___/ \___/| .__/  |___/ \___| \_(_)
>   | |
>   |_|
> This is the standard Hadoop Developer build environment.
> This has all the right tools installed required to build
> Hadoop from source.
> I have no name!@fceab279f8d1:~/hadoop$ whoami
> whoami: cannot find name for user ID 1112533
> I have no name!@fceab279f8d1:~/hadoop$ sudo ls
> sudo: unknown uid 1112533: who are you?
> {noformat}
> I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked 
> correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance

2020-02-11 Thread Tao Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated HADOOP-16850:
--
Attachment: HADOOP-16850.001.patch

> Support getting thread info from thread group for JvmMetrics to improve the 
> performance
> ---
>
> Key: HADOOP-16850
> URL: https://issues.apache.org/jira/browse/HADOOP-16850
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.3.1
>Reporter: Tao Yang
>Priority: Major
> Attachments: HADOOP-16850.001.patch
>
>
> Recently we found that jmx requests took almost 5s+ to complete when there were 
> 10,000+ threads in a stressed datanode process; meanwhile other http requests 
> were blocked and some disk operations were affected (we could see many "Slow 
> manageWriterOsCache" messages in the DN log, and these messages were rarely 
> seen again after we stopped sending jmx requests).
> The excessive time is spent getting thread info via ThreadMXBean, inside 
> which the ThreadImpl#getThreadInfo native method is called; the time complexity 
> of ThreadImpl#getThreadInfo is O(n*n) according to 
> [JDK-8185005|https://bugs.openjdk.java.net/browse/JDK-8185005], and it holds a 
> global thread lock that prevents creation or termination of threads.
> To improve this, I propose getting thread info from the thread group by 
> default, which improves performance a lot, while still supporting the original 
> ThreadMXBean approach when "-Dhadoop.metrics.jvm.use-thread-mxbean=true" is 
> configured in the startup command.
> An example of performance tests between these two approaches is as follows:
> {noformat}
> #Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5
> #Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9
> #Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15
> #Threads=1000, ThreadMXBean=9475768 ns, ThreadGroup=220447 ns, ratio: 42
> #Threads=2000, ThreadMXBean=53833729 ns, ThreadGroup=579608 ns, ratio: 92
> #Threads=3000, ThreadMXBean=196829971 ns, ThreadGroup=1157670 ns, ratio: 170
> {noformat}
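
A minimal sketch of the thread-group approach (hypothetical, not the attached patch), under the assumption that per-thread state rather than full stack traces is what JvmMetrics needs:
{code:java}
// Walk up to the root thread group and enumerate live threads, avoiding
// the O(n*n) ThreadMXBean#getThreadInfo path and its global thread lock.
ThreadGroup root = Thread.currentThread().getThreadGroup();
while (root.getParent() != null) {
  root = root.getParent();
}
// activeCount() is only an estimate, so oversize the array and use the
// count actually filled in by enumerate().
Thread[] threads = new Thread[root.activeCount() * 2];
int count = root.enumerate(threads, true);
for (int i = 0; i < count; i++) {
  Thread.State state = threads[i].getState();
  // ...tally NEW / RUNNABLE / BLOCKED / WAITING / TIMED_WAITING / TERMINATED
}
{code}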



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584956951
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 55s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  9s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 28s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 39s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 35s |  root: The patch generated 4 new 
+ 75 unchanged - 2 fixed = 79 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 19s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 25s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 31s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 130m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux accd65b42e91 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b8a78d |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/testReport/ |
   | Max. process+thread count | 1348 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/11/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377978605
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+public String fsOwner;
+public String supergroup;
+public UserGroupInformation callerUgi;
+public INodeAttributes[] inodeAttrs;
+public INode[] inodes;
+public byte[][] pathByNameArr;
+public int snapshotId;
+public String path;
+public int ancestorIndex;
+public boolean doCheckOwner;
+public FsAction ancestorAccess;
+public FsAction parentAccess;
+public FsAction access;
+public FsAction subAccess;
+public boolean ignoreEmptyDir;
+public String operationName;
+public CallerContext callerContext;
+
+public static class Builder {
+  public String fsOwner;
+  public String supergroup;
+  public UserGroupInformation callerUgi;
+  public INodeAttributes[] inodeAttrs;
+  public INode[] inodes;
+  public byte[][] pathByNameArr;
+  public int snapshotId;
+  public String path;
+  public int ancestorIndex;
+  public boolean doCheckOwner;
+  public FsAction ancestorAccess;
+  public FsAction parentAccess;
+  public FsAction access;
+  public FsAction subAccess;
+  public boolean ignoreEmptyDir;
+  public String operationName;
+  public CallerContext callerContext;
+
+  public AuthorizationContext build() {
+return new AuthorizationContext(this);
+  }
+
+  public Builder fsOwner(String val) {
+this.fsOwner = val;
+return this;
+  }
+
+  public Builder supergroup(String val) {
+this.supergroup = val;
+return this;
+  }
+
+  public Builder callerUgi(UserGroupInformation val) {
+this.callerUgi = val;
+return this;
+  }
+
+  public Builder inodeAttrs(INodeAttributes[] val) {
+this.inodeAttrs = val;
+return this;
+  }
+
+  public Builder inodes(INode[] val) {
+this.inodes = val;
+return this;
+  }
+
+  public Builder pathByNameArr(byte[][] val) {
+this.pathByNameArr = val;
+return this;
+  }
+
+  public Builder snapshotId(int val) {
+this.snapshotId = val;
+return this;
+  }
+
+  public Builder path(String val) {
+this.path = val;
+return this;
+  }
+
+  public Builder ancestorIndex(int val) {
+this.ancestorIndex = val;
+return this;
+  }
+
+  public Builder doCheckOwner(boolean val) {
+this.doCheckOwner = val;
+return this;
+  }
+
+  public Builder ancestorAccess(FsAction val) {
+this.ancestorAccess = val;
+return this;
+  }
+
+  public Builder parentAccess(FsAction val) {
+this.parentAccess = val;
+return this;
+  }
+
+  public Builder access(FsAction val) {
+this.access = val;
+return this;
+  }
+
+  public Builder subAccess(FsAction val) {
+this.subAccess = val;
+return this;
+  }
+
+  public Builder ignoreEmptyDir(boolean val) {
+this.ignoreEmptyDir = val;
+return this;
+  }
+
+  public Builder operationName(String val) {
+this.operationName = val;
+return this;
+  }
+
+  public Builder callerContext(CallerContext val) {
+this.callerContext = val;
+return this;
+  }
+}
+
+public AuthorizationContext(
+String fsOwner,
+String supergroup,
+UserGroupInformation callerUgi,
+INodeAttributes[] inodeAttrs,
+INode[] inodes,
+byte[][] pathByNameArr,
+int snapshotId,
+String path,
+int ancestorIndex,
+boolean doCheckOwner,
+FsAction ancestorAccess,
+FsAction parentAccess,
+FsAction access,
+FsAction subAccess,
+boolean ignoreEmptyDir) {
+  this.fsOwner = fsOwner;
+  this.supergroup = 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377977917
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -17,19 +17,227 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.ipc.CallerContext;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public abstract class INodeAttributeProvider {
 
+  public static class AuthorizationContext {
+public String fsOwner;
+public String supergroup;
+public UserGroupInformation callerUgi;
+public INodeAttributes[] inodeAttrs;
+public INode[] inodes;
+public byte[][] pathByNameArr;
+public int snapshotId;
+public String path;
+public int ancestorIndex;
+public boolean doCheckOwner;
+public FsAction ancestorAccess;
+public FsAction parentAccess;
+public FsAction access;
+public FsAction subAccess;
+public boolean ignoreEmptyDir;
+public String operationName;
+public CallerContext callerContext;
+
+public static class Builder {
+  public String fsOwner;
+  public String supergroup;
+  public UserGroupInformation callerUgi;
+  public INodeAttributes[] inodeAttrs;
+  public INode[] inodes;
+  public byte[][] pathByNameArr;
+  public int snapshotId;
+  public String path;
+  public int ancestorIndex;
+  public boolean doCheckOwner;
+  public FsAction ancestorAccess;
+  public FsAction parentAccess;
+  public FsAction access;
+  public FsAction subAccess;
+  public boolean ignoreEmptyDir;
+  public String operationName;
+  public CallerContext callerContext;
+
+  public AuthorizationContext build() {
+return new AuthorizationContext(this);
+  }
+
+  public Builder fsOwner(String val) {
+this.fsOwner = val;
+return this;
+  }
+
+  public Builder supergroup(String val) {
+this.supergroup = val;
+return this;
+  }
+
+  public Builder callerUgi(UserGroupInformation val) {
+this.callerUgi = val;
+return this;
+  }
+
+  public Builder inodeAttrs(INodeAttributes[] val) {
+this.inodeAttrs = val;
+return this;
+  }
+
+  public Builder inodes(INode[] val) {
+this.inodes = val;
+return this;
+  }
+
+  public Builder pathByNameArr(byte[][] val) {
+this.pathByNameArr = val;
+return this;
+  }
+
+  public Builder snapshotId(int val) {
+this.snapshotId = val;
+return this;
+  }
+
+  public Builder path(String val) {
+this.path = val;
+return this;
+  }
+
+  public Builder ancestorIndex(int val) {
+this.ancestorIndex = val;
+return this;
+  }
+
+  public Builder doCheckOwner(boolean val) {
+this.doCheckOwner = val;
+return this;
+  }
+
+  public Builder ancestorAccess(FsAction val) {
+this.ancestorAccess = val;
+return this;
+  }
+
+  public Builder parentAccess(FsAction val) {
+this.parentAccess = val;
+return this;
+  }
+
+  public Builder access(FsAction val) {
+this.access = val;
+return this;
+  }
+
+  public Builder subAccess(FsAction val) {
+this.subAccess = val;
+return this;
+  }
+
+  public Builder ignoreEmptyDir(boolean val) {
+this.ignoreEmptyDir = val;
+return this;
+  }
+
+  public Builder operationName(String val) {
+this.operationName = val;
+return this;
+  }
+
+  public Builder callerContext(CallerContext val) {
+this.callerContext = val;
+return this;
+  }
+}
+
+public AuthorizationContext(
+String fsOwner,
+String supergroup,
+UserGroupInformation callerUgi,
+INodeAttributes[] inodeAttrs,
+INode[] inodes,
+byte[][] pathByNameArr,
+int snapshotId,
+String path,
+int ancestorIndex,
+boolean doCheckOwner,
+FsAction ancestorAccess,
+FsAction parentAccess,
+FsAction access,
+FsAction subAccess,
+boolean ignoreEmptyDir) {
+  this.fsOwner = fsOwner;
+  this.supergroup = 

[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi

2020-02-11 Thread GitBox
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#discussion_r377977497
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
 ##
 @@ -68,6 +277,8 @@ public abstract void checkPermission(String fsOwner, String 
supergroup,
 boolean ignoreEmptyDir)
 throws AccessControlException;
 
+void checkPermissionWithContext(AuthorizationContext authzContext)
 
 Review comment:
   NIT: can you add javadoc for the new public method?
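   A hypothetical sketch of what that javadoc could say (the wording is an assumption):
   ```java
   /**
    * Checks whether the caller has permission to perform the requested
    * operation, using the state bundled into the given
    * {@link AuthorizationContext} (owner, caller UGI, inode attributes,
    * access modes, caller context).
    *
    * @param authzContext the authorization state for this check
    * @throws AccessControlException if the caller is not authorized
    */
   void checkPermissionWithContext(AuthorizationContext authzContext)
       throws AccessControlException;
   ```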


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16836) Bug in widely-used helper function caused valid configuration value to fail on multiple tests, causing build failure

2020-02-11 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034938#comment-17034938
 ] 

Ctest commented on HADOOP-16836:


Hi, what do you think of this issue?

> Bug in widely-used helper function caused valid configuration value to fail 
> on multiple tests, causing build failure
> 
>
> Key: HADOOP-16836
> URL: https://issues.apache.org/jira/browse/HADOOP-16836
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Blocker
>  Labels: configuration, easyfix, patch, test
> Attachments: HADOOP-16836-000.patch, HADOOP-16836-000.patch
>
>
> Test helper function 
> `org.apache.hadoop.io.file.tfile.TestTFileByteArrays#readRecords(org.apache.hadoop.fs.FileSystem,
>  org.apache.hadoop.fs.Path, int, org.apache.hadoop.conf.Configuration)` 
> (abbreviated as `readRecords()` below) is called in the 4 actively-used tests 
> below:
>  
> {code:java}
> org.apache.hadoop.io.file.tfile.TestTFileStreams#testOneEntryMixedLengths1
> org.apache.hadoop.io.file.tfile.TestTFileStreams#testOneEntryUnknownLength
> org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams#testOneEntryMixedLengths1
> org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams#testOneEntryUnknownLength{code}
>  
> These tests first call 
> `org.apache.hadoop.io.file.tfile.TestTFileStreams#writeRecords(int count, 
> boolean knownKeyLength, boolean knownValueLength, boolean close)` to write 
> `key-value` pair records into a `TFile` object, then call the helper function 
> `readRecords()` to assert that the `key` and `value` parts of the stored 
> `key-value` pair records match what was written previously. The `value` 
> parts of the `key-value` pairs from these tests are hardcoded strings with a 
> length of 6.
> Assertions in `readRecords()` are directly related to the value of the 
> configuration parameter `tfile.io.chunk.size`. The formal definition of 
> `tfile.io.chunk.size` is "Value chunk size in bytes. Default to 1MB. Values 
> of the length less than the chunk size is guaranteed to have known value 
> length in read time (See also 
> TFile.Reader.Scanner.Entry.isValueLengthKnown())".
> When `tfile.io.chunk.size` is configured to a value less than the length of 
> the `value` part of the `key-value` pairs from these 4 tests, these tests 
> will fail, even though the configured value for `tfile.io.chunk.size` is 
> semantically correct.
>  
> *Consequence*
> At least 4 actively-used tests failed on correctly configured parameters. 
> Any test using `readRecords()` could fail if the length of the hardcoded `value` 
> part it tests is larger than the configured value of 
> `tfile.io.chunk.size`. This caused a build failure of Hadoop-Common when these 
> tests were not skipped.
>  
> *Root Cause*
> `readRecords()` used 
> `org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry#getValueLength()` 
> (abbreviated as `getValueLength()` below) to get the full length of the 
> `value` part in the `key-value` pair. But `getValueLength()` can only return the 
> full length of the `value` part when that length is less than 
> `tfile.io.chunk.size`; otherwise, `getValueLength()` throws an exception, 
> causing `readRecords()` to fail and thus resulting in failures in the 
> aforementioned 4 tests. This is because `getValueLength()` does not know the 
> full length of the `value` part when the `value` part's size is larger than 
> `tfile.io.chunk.size`.
>  
> *Fixes*
> `readRecords()` should instead call 
> `org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry#getValue(byte[])` 
> (abbreviated as `getValue()` below), which returns the correct full length of 
> the `value` part regardless of whether the `value` length is larger than 
> `tfile.io.chunk.size`.
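
A minimal sketch of that fix inside `readRecords()` (hypothetical; `BUF_SIZE`, `scanner`, and `expectedValue` are stand-ins for the test's own names):
{code:java}
// Entry#getValue(byte[]) returns the actual value length even when it
// exceeds tfile.io.chunk.size, unlike Entry#getValueLength().
byte[] buf = new byte[BUF_SIZE];
TFile.Reader.Scanner.Entry entry = scanner.entry();
int valueLen = entry.getValue(buf);
assertEquals(expectedValue, new String(buf, 0, valueLen));
{code}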



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread GitBox
steveloughran commented on issue #1840: HADOOP-16853. 
ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
URL: https://github.com/apache/hadoop/pull/1840#issuecomment-584893697
 
 
   @bgaborg @mukund-thakur @liuml07  -anyone fancy a quick look @ this? 
   tested the codepaths with s3guard on and the bucket with/without version 
checking. (joy, a test which depends on S3 bucket settings for full coverage)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584831964
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 17s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 2732dfab6c4e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e637797 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/testReport/ |
   | Max. process+thread count | 425 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-11 Thread GitBox
DadanielZ commented on a change in pull request #1790: [HADOOP-16818] ABFS: 
Combine append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#discussion_r377860636
 
 

 ##
 File path: hadoop-tools/hadoop-azure/pom.xml
 ##
 @@ -675,6 +679,7 @@
   
 
   
+-->
 
 Review comment:
   It looks like there is no need to update this pom file; if that is true, can 
you reset the changes in this pom?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1840: HADOOP-16853. 
ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
URL: https://github.com/apache/hadoop/pull/1840#issuecomment-584811497
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 26s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  20m 47s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  84m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1840/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1840 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bef3526a60dc 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e637797 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1840/1/testReport/ |
   | Max. process+thread count | 393 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1840/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur edited a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur edited a comment on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584804743
 
 
   > if closing the `fs` value triggers failures in superclass cleanup, then 
you are sharing an FS instance between test cases. (i.e you are actually 
picking up the last one created). 
   That is fixed now; it was a mistake on my side. Closing "fs" no longer 
causes any problem in superclass cleanup. 
   
   One other thing to note here: there is only one test case where the 'fs' 
is actually created. All the others are just failure scenarios.
   
   > If you disable caching you should get a new one, which you can then close 
safely
   
   Already disabled filesystem caching.
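   
   For reference, a minimal self-contained sketch of that setup (the bucket 
URI and class name are made up; the config key is what 
S3ATestUtils.disableFilesystemCaching() is assumed to set):
   
   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   
   // Disable S3A FS caching so each test gets a fresh instance which can
   // be closed without pulling the rug from under other tests.
   public class FreshFsExample {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Assumed to be what S3ATestUtils.disableFilesystemCaching(conf) sets:
       conf.setBoolean("fs.s3a.impl.disable.cache", true);
       FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket"), conf);
       try {
         // ... test body would go here ...
       } finally {
         fs.close(); // safe: this instance is private to this test
       }
     }
   }
   ```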
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584804743
 
 
   
   > if closing the `fs` value triggers failures in superclass cleanup, then 
you are sharing an FS instance between test cases. (i.e you are actually 
picking up the last one created). 
   That is fixed now; it was a mistake on my side. Closing "fs" no longer 
causes any problem in superclass cleanup. 
   One other thing to note here: there is only one test case where the 'fs' 
is actually created. All the others are just failure scenarios.
   
   > If you disable caching you should get a new one, which you can then close 
safely
   Already disabled filesystem caching.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1823: HADOOP-16794 S3 
Encryption keys not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#discussion_r377842035
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java
 ##
 @@ -107,10 +106,15 @@ public void testEncryptionOverRename() throws Throwable {
 validateEncrytionSecrets(secrets);
 writeDataset(fs, src, data, data.length, 1024 * 1024, true);
 ContractTestUtils.verifyFileContents(fs, src, data);
-Path dest = path(src.getName() + "-copy");
-fs.rename(src, dest);
-ContractTestUtils.verifyFileContents(fs, dest, data);
-assertEncrypted(dest);
+Path targetDir = path("target");
 
 Review comment:
   bq. The reason dest file has to be created is to enforce rename to consider 
targetDir as a directory else it considers it as file.
   
   mkdir(targetDir) should have done that. Or is it not because of that funny 
"rename into empty dir" problem with rename() which everyone hates (a 
historical mistake, BTW)?
   
   If somehow that doesn't work and you want to create a file, 
ContractTestUtils.touch() will do that; add a comment above about why it's 
needed.
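   
   Something like this sketch (reusing the test's `fs`, `src`, `data` and 
`path()` helpers; the marker file name is made up):
   
   ```java
   // Sketch of the suggested setup: make the rename target an existing,
   // non-empty directory so rename(src, targetDir) moves src under it.
   Path targetDir = path("target");
   fs.mkdirs(targetDir);
   // If an empty directory still trips the "rename into empty dir" quirk,
   // materialise it with a zero-byte marker file and say why in a comment:
   ContractTestUtils.touch(fs, new Path(targetDir, "marker"));
   fs.rename(src, targetDir);
   Path dest = new Path(targetDir, src.getName());
   ContractTestUtils.verifyFileContents(fs, dest, data);
   assertEncrypted(dest);
   ```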
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584801005
 
 
   OK, production code all LGTM; just that test tuning
   
   if closing the `fs` value triggers failures in superclass cleanup, then you 
are sharing an FS instance between test cases. (i.e you are actually picking up 
the last one created). If you disable caching you should get a new one, which 
you can then close safely


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16852) ABFS: Send error back to client for Read Ahead request failure

2020-02-11 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-16852:
---
Description: 
Issue seen by a customer:

The failed requests we were seeing in the AbfsClient logging actually never 
made it out over the wire. We have found that there’s an issue with ADLS 
passthrough and the 8 read ahead threads that ADLSv2 spawns in 
ReadBufferManager.java. We depend on thread local storage in order to get the 
right JWT token and those threads do not have the right information in their 
thread local storage. Thus, when they pick up a task from the read ahead queue 
they fail by throwing an AzureCredentialNotFoundException exception in 
AbfsRestOperation.executeHttpOperation() where it calls 
client.getAccessToken(). This exception is silently swallowed by the read ahead 
threads in ReadBufferWorker.run(). As a result, every read ahead attempt 
results in a failed executeHttpOperation(), but still calls 
AbfsClientThrottlingIntercept.updateMetrics() and contributes to throttling 
(despite not making it out over the wire). After the read aheads fail, the main 
task thread performs the read with the right thread local storage information 
and succeeds, but first sleeps for up to 10 seconds due to the throttling.
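
To make the failure mode concrete, here is a minimal standalone sketch (plain 
Java, not Hadoop/ABFS code) of why a pooled worker thread never sees the 
submitting thread's thread-local token:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalTokenDemo {
  // Each thread gets its own slot; pool threads never inherit the
  // submitter's value.
  private static final ThreadLocal<String> TOKEN = new ThreadLocal<>();

  public static void main(String[] args) throws Exception {
    ExecutorService readAheadPool = Executors.newFixedThreadPool(2);
    TOKEN.set("jwt-for-main-thread");
    // Prints "worker sees: null" -- the pool thread has no token, just as
    // the read ahead threads lack credentials in executeHttpOperation().
    readAheadPool.submit(
        () -> System.out.println("worker sees: " + TOKEN.get())).get();
    // The main thread still sees its own value, so its read succeeds.
    System.out.println("main sees: " + TOKEN.get());
    readAheadPool.shutdown();
  }
}
{code}

(Note that InheritableThreadLocal only copies values at thread creation time, 
so a long-lived pool would still miss tokens set later.)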

> ABFS: Send error back to client for Read Ahead request failure
> --
>
> Key: HADOOP-16852
> URL: https://issues.apache.org/jira/browse/HADOOP-16852
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Issue seen by a customer:
> The failed requests we were seeing in the AbfsClient logging actually never 
> made it out over the wire. We have found that there’s an issue with ADLS 
> passthrough and the 8 read ahead threads that ADLSv2 spawns in 
> ReadBufferManager.java. We depend on thread local storage in order to get the 
> right JWT token and those threads do not have the right information in their 
> thread local storage. Thus, when they pick up a task from the read ahead 
> queue they fail by throwing an AzureCredentialNotFoundException exception in 
> AbfsRestOperation.executeHttpOperation() where it calls 
> client.getAccessToken(). This exception is silently swallowed by the read 
> ahead threads in ReadBufferWorker.run(). As a result, every read ahead 
> attempt results in a failed executeHttpOperation(), but still calls 
> AbfsClientThrottlingIntercept.updateMetrics() and contributes to throttling 
> (despite not making it out over the wire). After the read aheads fail, the 
> main task thread performs the read with the right thread local storage 
> information and succeeds, but first sleeps for up to 10 seconds due to the 
> throttling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377838398
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
+}
+assertNotNull("FileSystem should have been initialized", fs);
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
 
 Review comment:
   Potentially brittle, but we can deal with that when the text changes. We 
have found in the past that any test coded to look for AWS error messages is 
brittle against SDK updates. 
   
   Let's just go with this and, when things break, catch up.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377833699
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IOUtils;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  throw ex;
+}
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
+"No such file or directory: " + path,
+() -> fs.getFileStatus(path));
+
+Path src = new Path(fs.getUri() + "/testfile");
+byte[] data = dataset(1024, 'a', 'z');
+intercept(FileNotFoundException.class,
+"The specified bucket does not exist",
+() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true));
+  }
+
+  @Test
+  public void testBucketProbingV1() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 1);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingV2() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 2);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingParameterValidation() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 3);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+configuration.setInt(S3A_BUCKET_PROBE, -1);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Override
+  protected Configuration getConfiguration() {
+Configuration configuration = super.getConfiguration();
+S3ATestUtils.disableFilesystemCaching(configuration);
+return configuration;
+  }
+
+  @After
+  public void tearDown() throws Exception {
+IOUtils.cleanupWithLogger(getLogger(), fs);
 
 Review comment:
   Found the issue. Rather than overriding the teardown method, I implemented a 
new one, which caused JUnit to call teardown() twice, causing all the above 
problems. 
   Sorry, my bad. :(


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377829741
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IOUtils;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  throw ex;
+}
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
+"No such file or directory: " + path,
+() -> fs.getFileStatus(path));
+
+Path src = new Path(fs.getUri() + "/testfile");
+byte[] data = dataset(1024, 'a', 'z');
+intercept(FileNotFoundException.class,
+"The specified bucket does not exist",
+() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true));
+  }
+
+  @Test
+  public void testBucketProbingV1() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 1);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingV2() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 2);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingParameterValidation() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 3);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+configuration.setInt(S3A_BUCKET_PROBE, -1);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Override
+  protected Configuration getConfiguration() {
+Configuration configuration = super.getConfiguration();
+S3ATestUtils.disableFilesystemCaching(configuration);
+return configuration;
+  }
+
+  @After
+  public void tearDown() throws Exception {
+IOUtils.cleanupWithLogger(getLogger(), fs);
 
 Review comment:
   My tests are running against a new FS instance only; I confirmed that with 
the IDE debugger. I think what is happening is that we are calling fs.close() 
twice, once on the shared instance and once on my private instance, which 
stops all the services for that FS and leads to the mismatch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377827793
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is not done, to improve performance of
+   * S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
+   * check will be performed, which is potentially slow.
+   * @throws IOException
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   Yes. This is all just information for people looking at the code.
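   
   For readers of the thread, a rough sketch of the dispatch that javadoc 
describes (the validation message matches the tests quoted elsewhere in this 
review; the method bodies here are illustrative, not the patch itself):
   
   ```java
   // Illustrative only: probe == 0 skips the slow existence check; 1 or 2
   // perform it; anything else is rejected.
   private void doBucketProbing() throws IOException {
     int probe = getConf().getInt(S3A_BUCKET_PROBE, 2);
     if (probe < 0 || probe > 2) {
       throw new IllegalArgumentException(
           "Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2");
     }
     if (probe == 0) {
       return; // no remote call: faster init, but a missing bucket fails later
     }
     verifyBucketExists(); // potentially slow remote probe
   }
   ```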


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16853) ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16853 started by Steve Loughran.
---
> ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
> ---
>
> Key: HADOOP-16853
> URL: https://issues.apache.org/jira/browse/HADOOP-16853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete[auth=true]
> failing because the deleted file can still be read when the s3guard entry has 
> the versionId.
> Proposed: if the FS is versioned and the file status has versionID then we 
> switch to tests which assert the file is readable, rather than tests which 
> assert it isn't there



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #1840: HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread GitBox
steveloughran opened a new pull request #1840: HADOOP-16853. 
ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
URL: https://github.com/apache/hadoop/pull/1840
 
 
   
   Tested against S3 Ireland with/without change.detection.source = versionid.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16853) ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034665#comment-17034665
 ] 

Steve Loughran commented on HADOOP-16853:
-

This only surfaces if your bucket is versioned and the FS client is set to use 
version IDs when opening files; this is why I missed it before.
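
For anyone reproducing this: the client opts in to version-ID change detection 
with the standard S3A option below (a configuration sketch, not part of the 
patch):

{code}
<property>
  <name>fs.s3a.change.detection.source</name>
  <value>versionid</value>
</property>
{code}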

> ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
> ---
>
> Key: HADOOP-16853
> URL: https://issues.apache.org/jira/browse/HADOOP-16853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete[auth=true]
> failing because the deleted file can still be read when the s3guard entry has 
> the versionId.
> Proposed: if the FS is versioned and the file status has versionID then we 
> switch to tests which assert the file is readable, rather than tests which 
> assert it isn't there



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16853) ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16853:
---

 Summary: ITestS3GuardOutOfBandOperations failing on versioned S3 
buckets
 Key: HADOOP-16853
 URL: https://issues.apache.org/jira/browse/HADOOP-16853
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete[auth=true]

failing because the deleted file can still be read when the s3guard entry has 
the versionId.

Proposed: if the FS is versioned and the file status has versionID then we 
switch to tests which assert the file is readable, rather than tests which 
assert it isn't there
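
Roughly, the proposed branch (a sketch only: fs/path/status/expected and the 
helpers isVersionedStore()/readFully() are illustrative stand-ins, not the 
actual patch):

{code}
if (isVersionedStore(fs) && status.getVersionId() != null) {
  // Versioned store with a recorded versionId: the old data stays readable.
  byte[] bytes = readFully(fs, path);
  assertEquals(expected, new String(bytes, StandardCharsets.UTF_8));
} else {
  // Unversioned store: reading the deleted file must fail.
  intercept(FileNotFoundException.class, () -> readFully(fs, path));
}
{code}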




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16853) ITestS3GuardOutOfBandOperations failing on versioned S3 buckets

2020-02-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034664#comment-17034664
 ] 

Steve Loughran commented on HADOOP-16853:
-


{code}
[ERROR] Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
120.629 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations
[ERROR] 
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)
  Time elapsed: 3.56 s  <<< FAILURE!
java.lang.AssertionError: Expected a java.io.FileNotFoundException to be 
thrown, but got the result: : 16
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.expectExceptionWhenReading(ITestS3GuardOutOfBandOperations.java:1004)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.deleteFileInListing(ITestS3GuardOutOfBandOperations.java:974)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete(ITestS3GuardOutOfBandOperations.java:311)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

{code}

> ITestS3GuardOutOfBandOperations failing on versioned S3 buckets
> ---
>
> Key: HADOOP-16853
> URL: https://issues.apache.org/jira/browse/HADOOP-16853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations.testListingDelete[auth=true]
> failing because the deleted file can still be read when the s3guard entry has 
> the versionId.
> Proposed: if the FS is versioned and the file status has versionID then we 
> switch to tests which assert the file is readable, rather than tests which 
> assert it isn't there



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377796949
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IOUtils;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  throw ex;
+}
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
+"No such file or directory: " + path,
+() -> fs.getFileStatus(path));
+
+Path src = new Path(fs.getUri() + "/testfile");
+byte[] data = dataset(1024, 'a', 'z');
+intercept(FileNotFoundException.class,
+"The specified bucket does not exist",
+() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true));
+  }
+
+  @Test
+  public void testBucketProbingV1() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 1);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingV2() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 2);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingParameterValidation() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 3);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+configuration.setInt(S3A_BUCKET_PROBE, -1);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Override
+  protected Configuration getConfiguration() {
+Configuration configuration = super.getConfiguration();
+S3ATestUtils.disableFilesystemCaching(configuration);
+return configuration;
+  }
+
+  @After
+  public void tearDown() throws Exception {
+IOUtils.cleanupWithLogger(getLogger(), fs);
 
 Review comment:
   OK. So what's happening then is that your tests are picking up a shared FS 
instance, not the one you've just configured with different bucket init 
settings. Your tests aren't doing what you think they are.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Created] (HADOOP-16852) ABFS: Send error back to client for Read Ahead request failure

2020-02-11 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16852:
--

 Summary: ABFS: Send error back to client for Read Ahead request 
failure
 Key: HADOOP-16852
 URL: https://issues.apache.org/jira/browse/HADOOP-16852
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.1
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584636545
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 54s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 70be778f4667 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cc8ae59 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377796308
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
 ##
 @@ -608,3 +608,27 @@ with HADOOP-15669.
 
 Other options may be added to `fs.s3a.ssl.channel.mode` in the future as
 further SSL optimizations are made.
+
+## Tuning S3AFileSystem Initialization.
+Any client using S3AFileSystem has to initialize it by providing an S3 bucket
+and configuration. The init method checks whether the provided bucket is
+valid, which is a slow operation that leads to poor performance. We can skip
+bucket validation by configuring `fs.s3a.bucket.probe` as follows:
+
+```xml
+<property>
+  <name>fs.s3a.bucket.probe</name>
+  <value>0</value>
+  <description>
+ The value can be 0, 1 or 2(default). When set to 0, bucket existence
+ check won't be done during initialization thus making it faster.
+ Though it should be noted that if bucket is not available in S3,
 
 Review comment:
   no, that's fine


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584758279
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  6s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 54s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 17s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 34s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 27s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 40s |  root: The patch generated 6 new 
+ 75 unchanged - 2 fixed = 81 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 2 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 22s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 30s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 124m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 14c16a167728 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cc8ae59 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/testReport/ |
   | Max. process+thread count | 1448 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/10/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377659612
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is skipped to improve the performance
+   * of S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
+   * check will be performed, which is potentially slow.
+   * @throws IOException if the bucket probe fails
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   After reading some documentation, I understood what you meant here, so I 
have added the @Retries.RetryTranslated annotation there. That means that since 
the underlying methods are retried and translated, the caller shouldn't retry 
or translate again. Please correct me if I am wrong. 
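   
   A sketch of the convention, assuming the hadoop-aws `Retries` annotations 
(method bodies here are illustrative stand-ins, not the actual patch code):
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.fs.s3a.Retries;
   
   class ProbeSketch {
     /**
      * Retries and exception translation happen inside the callee, so this
      * method and its callers must not add another retry/translate layer.
      */
     @Retries.RetryTranslated
     void doBucketProbing() throws IOException {
       verifyBucketExists(); // assumed retried + translated internally
     }
   
     @Retries.RetryTranslated
     void verifyBucketExists() throws IOException {
       // stand-in for the real S3 existence check wrapped in retry logic
     }
   }
   ```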


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584694342
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 40s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  62m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cf0667027565 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cc8ae59 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584323280
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 55s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 12s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 52s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 12s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 28s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 133m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux c016059eec4a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/testReport/ |
   | Max. process+thread count | 1383 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-583367137
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  9s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 25s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 41s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 35s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 57s |  trunk passed  |
   | -0 :warning: |  patch  |   2m 22s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  2s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 42s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 46s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 15s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 36s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 127m 27s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux fb3047deee9a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7dac7e1 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/testReport/ |
   | Max. process+thread count | 1375 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
steveloughran commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584690703
 
 
   Made checkstyle fixes and diffed against trunk to (a) reduce the diff and 
(b) see what I needed to improve in the javadocs; mainly the RetryingCollection.
   
   I got a failure on a -Dscale auth run
   
   ```
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:257->Assert.assertTrue:41->Assert.fail:88
 files mismatch: between 
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-1"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-25"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-16"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-11"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-7"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-54"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-14"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-35"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-48"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-56"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-29"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-52"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-40"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-2"
 
"s3a://hwdev-steve-ireland-new/fork-0003/test/testBulkRenameAndDelete/src/file-24"
 "s3a:
   ```
   
   Now, I've been playing with older branch-2 versions recently and could have 
blamed that, but "bulk" and "delete" describe exactly what I was working on in 
this patch.
   
   It wasn't, but while working on these tests, with better renames, I managed 
to create a deadlock in the new code:
   
   1. S3ABlockOutputStream was waiting for space in the bounded thread pool so 
it could do an async put.
   1. But that thread pool was blocked by threads waiting for their async 
directory operations to complete.
   1. Outcome: total deadlock.
   
   Surfaced in ITestS3ADeleteManyFiles during parallel file creation.
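   
   Reduced to a minimal, self-contained sketch (nothing S3A-specific; just a 
task on a bounded pool blocking on a second task submitted to the same 
saturated pool):
   
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.TimeoutException;
   
   public class BoundedPoolDeadlock {
     public static void main(String[] args) throws Exception {
       ExecutorService pool = Executors.newFixedThreadPool(1); // bounded pool
       Future<?> outer = pool.submit(() -> {
         // The inner task can never start: the pool's only thread is busy
         // running this outer task, which blocks waiting for the inner one.
         Future<?> inner = pool.submit(() -> { });
         try {
           inner.get();
         } catch (Exception e) {
           throw new RuntimeException(e);
         }
       });
       try {
         outer.get(2, TimeUnit.SECONDS);
       } catch (TimeoutException e) {
         System.out.println("deadlocked, as expected");
       } finally {
         pool.shutdownNow();
       }
     }
   }
   ```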
   
   Actions
   * remove the async stuff from the end of rename()
   * keep dir marker delete operations in finishedWrite() async, but use the 
unbounded thread pool.
   * Cleanup + enhancement of ITestS3ADeleteManyFiles so that it tests src and 
dest paths more rigorously, and sets a page size of 50 for better coverage of 
the paged rename sequence.
   
   Makes me think we should do more parallel IO tests within the same process.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1823: HADOOP-16794 S3 
Encryption keys not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#discussion_r377693355
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java
 ##
 @@ -107,10 +106,15 @@ public void testEncryptionOverRename() throws Throwable {
 validateEncrytionSecrets(secrets);
 writeDataset(fs, src, data, data.length, 1024 * 1024, true);
 ContractTestUtils.verifyFileContents(fs, src, data);
-Path dest = path(src.getName() + "-copy");
-fs.rename(src, dest);
-ContractTestUtils.verifyFileContents(fs, dest, data);
-assertEncrypted(dest);
+Path targetDir = path("target");
 
 Review comment:
   
   
   > 6. dest file is completely ignored
   > 
   The dest file has to be created to force rename to treat targetDir as a 
directory; otherwise rename treats it as a file.
   
   > I need some clarification here.
   > 
   > * why the change
   >
   This change was made to address one of your comments above:
   "Maybe: in testEncryptionOverRename, rename the file into a directory."
   
   > * before the encryption settings were changed in copy, how did this new 
test fail?
   > 
   The encryption key of the destination file targetDir/src did not match the 
configured KMS key of the bucket; instead it was a default key generated by S3 
itself.
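   
   For reference, a sketch of the rename-into-directory behaviour being relied 
on, reusing the helpers from the quoted test (paths are illustrative):
   
   ```java
   // If the destination resolves to a directory, rename() moves the source
   // *into* it; creating a file under targetDir first guarantees that
   // targetDir is treated as a directory rather than a plain file path.
   Path targetDir = path("target");
   ContractTestUtils.touch(fs, new Path(targetDir, "dest"));
   Path renamed = new Path(targetDir, src.getName());
   fs.rename(src, targetDir);
   ContractTestUtils.verifyFileContents(fs, renamed, data);
   assertEncrypted(renamed);
   ```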
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377659612
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is skipped to improve the performance
+   * of S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
+   * check will be performed, which is potentially slow.
+   * @throws IOException if the bucket probe fails
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   After reading some documentation, I understood what you meant here, so I 
have added the @Retries.RetryTranslated annotation there. That means that since 
the underlying methods are retried and translated, the caller shouldn't retry 
or translate again. Please correct me if I am wrong. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584636545
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 54s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  64m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 70be778f4667 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cc8ae59 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034397#comment-17034397
 ] 

Hudson commented on HADOOP-16851:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17939 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17939/])
HADOOP-16851. Removed unused import in Configuration (github: rev 
cc8ae591049aff8d477fc372cee2a878f80c2b02)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.1
>Reporter: runzhou wu
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.3.0
>
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377594483
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.After;
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IOUtils;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  throw ex;
+}
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
+"No such file or directory: " + path,
+() -> fs.getFileStatus(path));
+
+Path src = new Path(fs.getUri() + "/testfile");
+byte[] data = dataset(1024, 'a', 'z');
+intercept(FileNotFoundException.class,
+"The specified bucket does not exist",
+() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true));
+  }
+
+  @Test
+  public void testBucketProbingV1() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 1);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingV2() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 2);
+intercept(FileNotFoundException.class,
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Test
+  public void testBucketProbingParameterValidation() throws Exception {
+Configuration configuration = getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 3);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+configuration.setInt(S3A_BUCKET_PROBE, -1);
+intercept(IllegalArgumentException.class,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2",
+"Should throw IllegalArgumentException",
+() -> FileSystem.get(uri, configuration));
+  }
+
+  @Override
+  protected Configuration getConfiguration() {
+Configuration configuration = super.getConfiguration();
+S3ATestUtils.disableFilesystemCaching(configuration);
+return configuration;
+  }
+
+  @After
+  public void tearDown() throws Exception {
+IOUtils.cleanupWithLogger(getLogger(), fs);
 
 Review comment:
   Adding this extra cleanup throws "FileSystem is closed!" because 
AbstractFSContractTestBase.deleteTestDirInTeardown() runs in the superclass 
teardown after each test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377593684
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
 ##
 @@ -608,3 +608,27 @@ with HADOOP-15669.
 
 Other options may be added to `fs.s3a.ssl.channel.mode` in the future as
 further SSL optimizations are made.
+
+## Tuning S3AFileSystem Initialization
+Any client using S3AFileSystem has to initialize it by providing an S3 bucket
+and a configuration. The init method checks whether the supplied bucket is
+valid, which is a slow operation that hurts performance. Bucket validation
+can be skipped by configuring `fs.s3a.bucket.probe` as follows:
+
+```xml
+<property>
+  <name>fs.s3a.bucket.probe</name>
+  <value>0</value>
+  <description>
+ The value can be 0, 1 or 2 (default). When set to 0, the bucket existence
+ check is skipped during initialization, making it faster.
+ Though it should be noted that if the bucket is not available in S3,
 
 Review comment:
   Doc changes similar to what you suggested are already present. Do you want 
me to tweak this here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377593111
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
+}
+assertNotNull("FileSystem should have been initialized", fs);
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
 
 Review comment:
   Added the expected text as the `contained` argument for verification.
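   
   For anyone unfamiliar with the helper: `LambdaTestUtils.intercept(Class, 
String, Callable)` fails the test unless the lambda throws the given exception 
type *and* the exception's string form contains the given text. A sketch, 
mirroring the test above:
   
   ```java
   // Fails unless getFileStatus() throws FileNotFoundException whose
   // message contains the expected path text.
   intercept(FileNotFoundException.class,
       "No such file or directory: " + path,
       () -> fs.getFileStatus(path));
   ```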


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377590167
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
 
 Review comment:
   I was worried about this, but somehow the new conf settings were getting 
picked up; I need to figure out how. Anyway, I have disabled filesystem caching 
so that we don't see intermittent failures.
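   
   Background on why the cache matters here: `FileSystem.get()` returns a 
cached instance keyed on the URI scheme and authority (plus the user), so 
per-test `Configuration` changes can be silently ignored when a cached 
instance already exists. A minimal sketch of the standard cache-disable 
switch, which follows the `fs.<scheme>.impl.disable.cache` pattern:
   
   ```java
   Configuration conf = new Configuration();
   // Force FileSystem.get() to build a fresh instance from this conf
   // instead of returning a previously cached one.
   conf.setBoolean("fs.s3a.impl.disable.cache", true);
   ```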


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-578686362
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  8s |  https://github.com/apache/hadoop/pull/1790 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-583353780
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 13s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 14s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 14s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 11s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 16s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 16s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 16s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 15s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 73f05f3de826 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fafe78f |
   | Default Java | 1.8.0_232 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/testReport/ |
   | Max. process+thread count | 311 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine 
append+flush calls for blockblob & appendblob
URL: https://github.com/apache/hadoop/pull/1790#issuecomment-578722809
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  27m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 36s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 50s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 49s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 15s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javac  |   0m 15s |  hadoop-azure in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 13s |  The patch fails to run 
checkstyle in hadoop-azure  |
   | -1 :x: |  mvnsite  |   0m 17s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 6 line(s) with tabs.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 20s |  hadoop-azure in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 19s |  hadoop-azure in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 19s |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  80m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 8c0ffd23132b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7f40e66 |
   | Default Java | 1.8.0_232 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/whitespace-tabs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/testReport/ |
   | Max. process+thread count | 418 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] steveloughran closed pull request #1821: HADOOP-16825: Checkaccess testcase fix

2020-02-11 Thread GitBox
steveloughran closed pull request #1821: HADOOP-16825: Checkaccess testcase fix
URL: https://github.com/apache/hadoop/pull/1821
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16851:
---

Assignee: Jan Hentschel

> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: runzhou wu
>Assignee: Jan Hentschel
>Priority: Trivial
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it 
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16851:

Affects Version/s: 3.2.1

> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: runzhou wu
>Assignee: Jan Hentschel
>Priority: Trivial
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it 
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16851:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.1
>Reporter: runzhou wu
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: 3.3.0
>
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it 
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16851:

Component/s: conf

> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.1
>Reporter: runzhou wu
>Assignee: Jan Hentschel
>Priority: Trivial
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it 
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #1837: HADOOP-16851. Removed unused import in Configuration

2020-02-11 Thread GitBox
steveloughran merged pull request #1837: HADOOP-16851. Removed unused import in 
Configuration
URL: https://github.com/apache/hadoop/pull/1837
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1837: HADOOP-16851. Removed unused import in Configuration

2020-02-11 Thread GitBox
steveloughran commented on issue #1837: HADOOP-16851. Removed unused import in 
Configuration
URL: https://github.com/apache/hadoop/pull/1837#issuecomment-584599251
 
 
   +1 -thanks for this. Housekeeping is always nice


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16851) unused import in Configuration class

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16851:
---

 Key: HADOOP-16851  (was: YARN-9696)
Assignee: (was: Jan Hentschel)
 Project: Hadoop Common  (was: Hadoop YARN)

> unused import in Configuration class
> 
>
> Key: HADOOP-16851
> URL: https://issues.apache.org/jira/browse/HADOOP-16851
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: runzhou wu
>Priority: Trivial
>
> LinkedList is not used.
> It is on line 54; the content is "import java.util.LinkedList;". I think it 
> can be deleted.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16847) Test TestGroupsCaching fail if HashSet iterates in a different order

2020-02-11 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034365#comment-17034365
 ] 

Hudson commented on HADOOP-16847:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17937 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17937/])
HADOOP-16847. Test can fail if HashSet iterates in a different order. (github: 
rev d36cd37e606e03b4c2b7b25b9155fc5ec5dc379d)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java


> Test TestGroupsCaching fail if HashSet iterates in a different order
> 
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.
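A minimal standalone sketch of the non-determinism in question (hypothetical 
group names; this is not the Hadoop test itself):
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class IterationOrderDemo {
  public static void main(String[] args) {
    // HashSet iteration order is unspecified and may differ across
    // JDK versions or element hash codes.
    Set<String> hash = new HashSet<>(Arrays.asList("grp2", "grp1", "grp3"));
    System.out.println(hash);    // order not guaranteed

    // LinkedHashSet preserves insertion order, so converting it to a
    // list and asserting against a fixed expected list is stable.
    Set<String> linked =
        new LinkedHashSet<>(Arrays.asList("grp2", "grp1", "grp3"));
    System.out.println(linked);  // always [grp2, grp1, grp3]
  }
}
{code}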



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
steveloughran commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584595196
 
 
   style
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:278:
   public static final String BULK_DELETE_PAGE_SIZE =: 'member def modifier' 
has incorrect indentation level 3, expected level should be 2. [Indentation]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:279:
  "fs.s3a.bulk.delete.page.size";: '"fs.s3a.bulk.delete.page.size"' has 
incorrect indentation level 6, expected level should be 7. [Indentation]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1965:
   * with the counter set to the number of keys, rather than the number of 
invocations: Line is longer than 80 characters (found 86). [LineLength]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1967:
   * This is because S3 considers each key as one mutating operation on the 
store: Line is longer than 80 characters (found 81). [LineLength]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java:50:import
 static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;:15: 
Unused import - 
org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion. 
[UnusedImports]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:206:
   * @return true if the DDB table has prepaid IO and is small enough to 
throttle.: Line is longer than 80 characters (found 82). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java:518:
  public void test_999_delete_all_entries() throws Throwable {:15: Name 
'test_999_delete_all_entries' must match pattern '^[a-z][a-zA-Z0-9]*$'. 
[MethodName]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ThrottleTracker.java:113:
  LOG.warn("No throttling detected in {} against {}", this, 
ddbms.toString());: Line is longer than 80 characters (found 82). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:117:
  @Parameterized.Parameters(name = 
"bulk-delete-client-retry={0}-requests={2}-size={1}"): Line is longer than 80 
characters (found 88). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:184:
  public void test_010_Reset() throws Throwable {:15: Name 'test_010_Reset' 
must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:189:
  public void test_020_DeleteThrottling() throws Throwable {:15: Name 
'test_020_DeleteThrottling' must match pattern '^[a-z][a-zA-Z0-9]*$'. 
[MethodName]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ILoadTestS3ABulkDeleteThrottling.java:202:
  public void test_030_Sleep() throws Throwable {:15: Name 'test_030_Sleep' 
must match pattern '^[a-z][a-zA-Z0-9]*$'. [MethodName]
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584280267
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 57s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 55s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 38s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 39s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 44s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 43s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 23s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 153m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux dc89e8d171fe 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/testReport/ |
   | Max. process+thread count | 1597 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[jira] [Updated] (HADOOP-16847) Test TestGroupsCaching fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16847:

Summary: Test TestGroupsCaching fail if HashSet iterates in a different 
order  (was: Test can fail if HashSet iterates in a different order)

> Test TestGroupsCaching fail if HashSet iterates in a different order
> 
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16847) Test TestGroupsCaching fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16847:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Test TestGroupsCaching fail if HashSet iterates in a different order
> 
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034355#comment-17034355
 ] 

Steve Loughran commented on HADOOP-16847:
-

+1; merged to 3.2+.

I forgot to say which test failed in the commit. Sorry.

> Test can fail if HashSet iterates in a different order
> --
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16847:

Fix Version/s: 3.2.2
   3.3.0

> Test can fail if HashSet iterates in a different order
> --
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16847:

Component/s: (was: security)
 test

> Test can fail if HashSet iterates in a different order
> --
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16847) Test can fail if HashSet iterates in a different order

2020-02-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16847:
---

Assignee: testfixer0

> Test can fail if HashSet iterates in a different order
> --
>
> Key: HADOOP-16847
> URL: https://issues.apache.org/jira/browse/HADOOP-16847
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.1
>Reporter: testfixer0
>Assignee: testfixer0
>Priority: Minor
> Attachments: HADOOP-16847-000.patch, HADOOP-16847-000.patch, 
> HADOOP-16874-001.patch
>
>
> The test `testNegativeGroupCaching` can fail if the iteration order of the 
> HashSet changes. In detail, the method `assertEquals` (line 331) compares 
> `groups.getGroups(user)` with an ArrayList `myGroups`. The method `getGroups` 
> converts `allGroups` (a HashSet) to a list, which relies on the HashSet's 
> iterator. However, that iteration order is non-deterministic.
> This PR proposes to change the HashSet to a LinkedHashSet for a deterministic 
> order.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #1835: HADOOP-16847. Test can fail if HashSet iterates in a different order

2020-02-11 Thread GitBox
steveloughran merged pull request #1835: HADOOP-16847. Test can fail if HashSet 
iterates in a different order
URL: https://github.com/apache/hadoop/pull/1835
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1835: HADOOP-16847. Test can fail if HashSet iterates in a different order

2020-02-11 Thread GitBox
hadoop-yetus removed a comment on issue #1835: HADOOP-16847. Test can fail if 
HashSet iterates in a different order
URL: https://github.com/apache/hadoop/pull/1835#issuecomment-583365773
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 26s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 32s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m  6s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 43s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 122m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1835 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bdd9d1487aa6 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7dac7e1 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/testReport/ |
   | Max. process+thread count | 1384 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1835/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #1831: Hadoop 16845: ITestAbfsClient.testContinuationTokenHavingEqualSign failing

2020-02-11 Thread GitBox
steveloughran closed pull request #1831: Hadoop 16845: 
ITestAbfsClient.testContinuationTokenHavingEqualSign failing
URL: https://github.com/apache/hadoop/pull/1831
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1831: Hadoop 16845: ITestAbfsClient.testContinuationTokenHavingEqualSign failing

2020-02-11 Thread GitBox
steveloughran commented on issue #1831: Hadoop 16845: 
ITestAbfsClient.testContinuationTokenHavingEqualSign failing
URL: https://github.com/apache/hadoop/pull/1831#issuecomment-584588357
 
 
   closing this as @ThomasMarquardt has merged it


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1823: HADOOP-16794 S3 
Encryption keys not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#discussion_r377571443
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java
 ##
 @@ -107,10 +106,15 @@ public void testEncryptionOverRename() throws Throwable {
 validateEncrytionSecrets(secrets);
 writeDataset(fs, src, data, data.length, 1024 * 1024, true);
 ContractTestUtils.verifyFileContents(fs, src, data);
-Path dest = path(src.getName() + "-copy");
-fs.rename(src, dest);
-ContractTestUtils.verifyFileContents(fs, dest, data);
-assertEncrypted(dest);
+Path targetDir = path("target");
 
 Review comment:
   I am looking at this, trying to understand what it is doing.
   
   Before: we created a file src, renamed it to dest, and verified that the 
contents were unchanged and that dest was encrypted.
   
   After: 
   1. src is created as a dataset
   1. new path targetDir created
   1. a file `dest` is created at target/src+"-another" with a different 
dataset; contents verified
   1. rename(src, targetDir) to create the file targetDir/src
   1. which is verified
   1. dest file is completely ignored
   
   So why the change here? I don't see why we need the new test file, and the 
only change now is that you're renaming into a subdirectory which already 
exists rather than to the path of the destination file (both shapes are 
sketched below).
   
   I need some clarification here.
   * why the change
   * before the encryption settings were changed in copy, how did this new test 
fail?
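
   For reference, a minimal sketch of the two shapes (a fragment using the 
paths and helpers named in the diff, not a proposed patch; the `mkdirs` call 
is an assumption about how targetDir comes into being):
   ```java
   // Before: rename src to a sibling file path and check that file.
   Path dest = path(src.getName() + "-copy");
   fs.rename(src, dest);
   ContractTestUtils.verifyFileContents(fs, dest, data);
   assertEncrypted(dest);

   // After: rename src into an existing directory, which creates
   // targetDir/src as the renamed file; that is what gets verified.
   Path targetDir = path("target");
   fs.mkdirs(targetDir);   // assumption: the directory is created first
   fs.rename(src, targetDir);
   Path renamed = new Path(targetDir, src.getName());
   ContractTestUtils.verifyFileContents(fs, renamed, data);
   assertEncrypted(renamed);
   ```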
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377555178
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
 
 Review comment:
   actually, given it's non-static, it will be unique to each test case. You 
can just override the `teardown()` method and add an 
`IOUtils.cleanupWithLogger(LOG, fs)` call so the fs variable is closed 
robustly if it is set. Do call the superclass afterwards.
   
   FWIW, the S3A base test suite already retrieves an FS instance for each test 
case, so you can pick that up; it's just a bit fiddlier to configure. Don't 
worry about it here, but you will eventually have to learn your way around that 
test code.
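   
   Something like this, as a sketch (assuming the base class declares 
`teardown() throws Exception`; `LOG` and `fs` as in the test above):
   ```java
   @Override
   public void teardown() throws Exception {
     // Close the per-test fs instance robustly, logging rather than
     // propagating any close failure, then let the superclass clean up.
     IOUtils.cleanupWithLogger(LOG, fs);
     fs = null;
     super.teardown();
   }
   ```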


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377552989
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the client,
+   * bucket existence check is not done to improve performance of
+   * S3AFileSystem initialisation. When set to 1 or 2, bucket existence check
+   * will be performed which is potentially slow.
+   * @throws IOException
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   You just need to add the retry annotation on the method, based on the inner 
ones. It's not about actually doing the retries, just declaring what the 
method does for people looking at it.
   
   The goal is that if we can keep those attributes accurate, you just need to 
look at a method to determine the complete retry policy that it has, all the 
way down.
   
   This means we need to add the attribute to all methods, and keep an eye on 
them to make sure they don't go invalid/out of date after changes underneath.
   
   It's a shame we can't automate this, but the need to have the attributes 
does force us to audit the code. Like you say: we mustn't retry around a 
retry, but we must have a retry somewhere above every non-retried operation.
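   
   As a sketch, assuming the existing `org.apache.hadoop.fs.s3a.Retries` 
annotations (placement illustrative, not the final patch):
   ```java
   // The inner verifyBucketExists()/verifyBucketExistsV2() calls already
   // retry and translate exceptions, so the wrapper only declares that fact.
   @Retries.RetryTranslated
   private void doBucketProbing() throws IOException {
     // ... existing probe logic unchanged ...
   }
   ```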


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance

2020-02-11 Thread Tao Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated HADOOP-16850:
--
Description: 
Recently we found a JMX request taking almost 5s to complete when there were 
10,000+ threads in a stressed datanode process; meanwhile other HTTP requests 
were blocked and some disk operations were affected (we saw many "Slow 
manageWriterOsCache" messages in the DN log, and these messages were rarely 
seen again after we stopped sending JMX requests).

The excessive time is spent getting thread info via ThreadMXBean, inside 
which the ThreadImpl#getThreadInfo native method is called. The time 
complexity of ThreadImpl#getThreadInfo is O(n*n) according to 
[JDK-8185005|https://bugs.openjdk.java.net/browse/JDK-8185005], and it holds 
the global thread lock, preventing creation or termination of threads.

To improve this, I propose to support getting thread info from the thread 
group, which improves performance substantially by default, while keeping the 
original approach available when "-Dhadoop.metrics.jvm.use-thread-mxbean=true" 
is configured in the startup command.

An example of performance tests between these two approaches is as follows:
{noformat}
#Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5
#Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9
#Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15
#Threads=1000, ThreadMXBean=9475768 ns, ThreadGroup=220447 ns, ratio: 42
#Threads=2000, ThreadMXBean=53833729 ns, ThreadGroup=579608 ns, ratio: 92
#Threads=3000, ThreadMXBean=196829971 ns, ThreadGroup=1157670 ns, ratio: 170
{noformat}

  was:
Recently we found a JMX request taking almost 5s to complete when there were 
10,000+ threads in a stressed datanode process; meanwhile other HTTP requests 
were blocked and some disk operations were affected (we saw many "Slow 
manageWriterOsCache" messages in the DN log, and these messages were rarely 
seen again after we stopped sending JMX requests).

The excessive time is spent getting thread info via ThreadMXBean, inside 
which the ThreadImpl#getThreadInfo native method is called. The time 
complexity of ThreadImpl#getThreadInfo is O(n*n) according to JDK-8185005, and 
it may hold the global thread lock (preventing creation or termination of 
threads) for a long time.

To improve this, I propose to support getting thread info from the thread 
group, which improves performance substantially by default, while keeping the 
original approach available when "-Dhadoop.metrics.jvm.use-thread-mxbean=true" 
is configured in the startup command.

An example of performance tests between these two approaches is as follows:
{noformat}
#Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5
#Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9
#Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15
#Threads=1000, ThreadMXBean=9475768 ns, ThreadGroup=220447 ns, ratio: 42
#Threads=2000, ThreadMXBean=53833729 ns, ThreadGroup=579608 ns, ratio: 92
#Threads=3000, ThreadMXBean=196829971 ns, ThreadGroup=1157670 ns, ratio: 170
{noformat}
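
For illustration, a rough standalone sketch of the two approaches being 
compared (class and method names are hypothetical; the timing harness is not 
the proposed patch):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;

public class ThreadInfoProbe {

  // Approach 1: ThreadMXBean#getThreadInfo, O(n*n) per JDK-8185005;
  // holds the global thread lock while it runs.
  static ThreadInfo[] viaMxBean() {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    return bean.getThreadInfo(bean.getAllThreadIds(), 0);
  }

  // Approach 2: enumerate Thread objects from the root ThreadGroup;
  // avoids the global thread lock.
  static Thread[] viaThreadGroup() {
    ThreadGroup group = Thread.currentThread().getThreadGroup();
    while (group.getParent() != null) {
      group = group.getParent();
    }
    // Double the reported count to allow for threads started between
    // activeCount() and enumerate().
    Thread[] threads = new Thread[group.activeCount() * 2];
    int n = group.enumerate(threads, true);
    return Arrays.copyOf(threads, n);
  }

  public static void main(String[] args) {
    long t0 = System.nanoTime();
    viaMxBean();
    long t1 = System.nanoTime();
    viaThreadGroup();
    long t2 = System.nanoTime();
    System.out.printf("ThreadMXBean=%d ns, ThreadGroup=%d ns%n",
        t1 - t0, t2 - t1);
  }
}
{code}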


> Support getting thread info from thread group for JvmMetrics to improve the 
> performance
> ---
>
> Key: HADOOP-16850
> URL: https://issues.apache.org/jira/browse/HADOOP-16850
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.3.1
>Reporter: Tao Yang
>Priority: Major
>
> Recently we found a JMX request taking almost 5s to complete when there were 
> 10,000+ threads in a stressed datanode process; meanwhile other HTTP requests 
> were blocked and some disk operations were affected (we saw many "Slow 
> manageWriterOsCache" messages in the DN log, and these messages were rarely 
> seen again after we stopped sending JMX requests).
> The excessive time is spent getting thread info via ThreadMXBean, inside 
> which the ThreadImpl#getThreadInfo native method is called. The time 
> complexity of ThreadImpl#getThreadInfo is O(n*n) according to 
> [JDK-8185005|https://bugs.openjdk.java.net/browse/JDK-8185005], and it holds 
> the global thread lock, preventing creation or termination of threads.
> To improve this, I propose to support getting thread info from the thread 
> group, which improves performance substantially by default, while keeping 
> the original approach available when 
> "-Dhadoop.metrics.jvm.use-thread-mxbean=true" is configured in the 
> startup command.
> An example of performance tests between these two approaches is as follows:
> {noformat}
> #Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5
> #Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9
> #Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15
> #Threads=1000, ThreadMXBean=9475768 ns, 

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-11 Thread GitBox
mukund-thakur commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377517531
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the client,
+   * bucket existence check is not done to improve performance of
+   * S3AFileSystem initialisation. When set to 1 or 2, bucket existence check
+   * will be performed which is potentially slow.
+   * @throws IOException
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   verifyBucketExists() and verifyBucketExistsV2() are the methods which get 
called from the doBucketProbing() method, and I see they are already using the 
invoker, which has a retry policy set to TRY_ONCE_THEN_FAIL. Do we need to put 
an explicit retry in this method? 
   Also, both these methods are annotated with RetryTranslated, and the 
documentation of @Retries says that if RetryTranslated is used, the caller 
shouldn't perform another layer of retries.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org