[jira] [Work logged] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
[ https://issues.apache.org/jira/browse/HADOOP-17682?focusedWorklogId=632026&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632026 ]

ASF GitHub Bot logged work on HADOOP-17682:
---
Author: ASF GitHub Bot
Created on: 01/Aug/21 02:48
Start Date: 01/Aug/21 02:48
Worklog Time Spent: 10m

Work Description: sumangala-patki edited a comment on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-889861923

TEST RESULTS

HNS Account Location: East US 2
NonHNS Account Location: East US 2, Central US

```
HNS OAuth
[ERROR] Failures:
[ERROR] TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 11 is not within the expected range: [5.60, 8.40].
[ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testWriteAfterBreakLease:240 » TestTimedOut test...
[ERROR] Tests run: 558, Failures: 0, Errors: 1, Skipped: 98
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 1, Skipped: 52

AppendBlob HNS-OAuth
[ERROR] Failures:
[ERROR] TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 9 is not within the expected range: [5.60, 8.40].
[ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
[ERROR] Failures:
[ERROR] ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89 The actual value of 99 was not equal to the expected value
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendNoInfiniteLease:178->twoWriters:166 » IO
[ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 98
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 1, Skipped: 76

HNS-SharedKey
[WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
[WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 54
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40

NonHNS-SharedKey
[WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
[WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 276
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
```

JIRAs to track failures: [TestAbfsClientThrottlingAnalyzer](https://issues.apache.org/jira/browse/HADOOP-17826), [DistCp test](https://issues.apache.org/jira/browse/HADOOP-17628), [testAbfsStreamOps](https://issues.apache.org/jira/browse/HADOOP-17716), [ITestAzureBlobFileSystemLease](https://issues.apache.org/jira/browse/HADOOP-17781)

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 632026)
Time Spent: 4h 50m (was: 4h 40m)

> ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
> --
>
> Key: HADOOP-17682
> URL: https://issues.apache.org/jira/browse/HADOOP-17682
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h 50m
> Remaining Estimate: 0h
>
> ABFS open methods require certain information (contentLength, eTag, etc.) to
> create an InputStream for the file at the given path. This information is
> retrieved via a GetFileStatus request to the backend.
> However, client applications may often have access to the FileStatus prior to
> invoking the open API. Providing this FileStatus to the driver through the
> OpenFileParameters argument of openFileWithOptions() can help avoid the call
> to Store for FileStatus.
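The description above boils down to a metadata-reuse optimization: if the caller already holds a FileStatus, the driver can skip the GetFileStatus round trip when opening the file. A minimal toy model of that idea follows (Python for brevity; every class and method name here is illustrative, not the ABFS/Hadoop API — in Hadoop the hint travels through OpenFileParameters on the openFile() builder path):

```python
# Toy model of the HADOOP-17682 optimization (illustrative names only):
# open() needs file metadata such as content length and eTag; when the
# caller already holds a FileStatus, the backend metadata request is skipped.
from dataclasses import dataclass

@dataclass
class FileStatus:
    path: str
    content_length: int
    etag: str

class ToyStore:
    def __init__(self, files):
        self._files = files              # path -> (length, etag)
        self.get_file_status_calls = 0   # counts backend metadata requests

    def get_file_status(self, path):
        # Simulates the GetFileStatus request to the backend.
        self.get_file_status_calls += 1
        length, etag = self._files[path]
        return FileStatus(path, length, etag)

    def open(self, path, status=None):
        # Reuse caller-supplied metadata when present; otherwise fetch it.
        if status is None:
            status = self.get_file_status(path)
        return {"length": status.content_length, "etag": status.etag}

store = ToyStore({"/data/a.txt": (1024, "0x1")})
st = store.get_file_status("/data/a.txt")    # one backend call
store.open("/data/a.txt", status=st)         # no additional call
assert store.get_file_status_calls == 1
```

The same shape appears in real client workloads: a listing already produced the FileStatus, so re-fetching it on open is pure overhead.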
[jira] [Work logged] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
[ https://issues.apache.org/jira/browse/HADOOP-17682?focusedWorklogId=632019&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632019 ]

ASF GitHub Bot logged work on HADOOP-17682:
---
Author: ASF GitHub Bot
Created on: 01/Aug/21 01:52
Start Date: 01/Aug/21 01:52
Worklog Time Spent: 10m

Work Description: sumangala-patki edited a comment on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-889861923

TEST RESULTS

HNS Account Location: East US 2
NonHNS Account Location: East US 2, Central US

```
HNS OAuth
[ERROR] Failures:
[ERROR] TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 11 is not within the expected range: [5.60, 8.40].
[ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testWriteAfterBreakLease:240 » TestTimedOut test...
[ERROR] Tests run: 558, Failures: 0, Errors: 1, Skipped: 98
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 1, Skipped: 52

AppendBlob HNS-OAuth
[ERROR] Failures:
[ERROR] TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 9 is not within the expected range: [5.60, 8.40].
[ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
[ERROR] Failures:
[ERROR] ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89 The actual value of 99 was not equal to the expected value
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendNoInfiniteLease:178->twoWriters:166 » IO
[ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 98
[ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 76

HNS-SharedKey
[WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
[WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 54
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40

NonHNS-SharedKey
[WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
[WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 276
[ERROR] Errors:
[ERROR] ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635 » TestTimedOut
[ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
```

JIRAs to track failures: [TestAbfsClientThrottlingAnalyzer](https://issues.apache.org/jira/browse/HADOOP-17826), [DistCp test](https://issues.apache.org/jira/browse/HADOOP-17628)
Issue Time Tracking
---
Worklog Id: (was: 632019)
Time Spent: 4h 40m (was: 4.5h)

> ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
> --
>
> Key: HADOOP-17682
> URL: https://issues.apache.org/jira/browse/HADOOP-17682
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Reporter: Sumangala Patki
> Assignee: Sumangala Patki
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h 40m
> Remaining Estimate: 0h
>
> ABFS open methods require certain information (contentLength, eTag, etc.) to
> create an InputStream for the file at the given path. This information is
> retrieved via a GetFileStatus request to the backend.
> However, client applications may often have access to the FileStatus prior to
> invoking the open API. Providing this FileStatus to the driver through the
> OpenFileParameters argument of openFileWithOptions() can help avoid the call
> to Store for FileStatus.
> This PR adds
[jira] [Updated] (HADOOP-17826) ABFS: Transient failure of TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting
[ https://issues.apache.org/jira/browse/HADOOP-17826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sumangala Patki updated HADOOP-17826:
-

Description:
Transient failure of the below test observed for HNS OAuth, AppendBlob HNS OAuth and Non-HNS SharedKey combinations. The value denoted by "actual value" below varies across failures, and exceeds the upper limit of the expected range.

_TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 10 is not within the expected range: [5.60, 8.40]._

Verified failure with client and server in the same region to rule out network issues.

was:
Transient failure of the below test observed for HNS OAuth, AppendBlob HNS OAuth and Non-HNS SharedKey combinations. The value denoted by "actual value" below varies across failures, and exceeds the upper limit of the expected range.

_TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 10 is not within the expected range: [5.60, 8.40]._

> ABFS: Transient failure of
> TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting
> --
>
> Key: HADOOP-17826
> URL: https://issues.apache.org/jira/browse/HADOOP-17826
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sumangala Patki
> Priority: Major
>
> Transient failure of the below test observed for HNS OAuth, AppendBlob HNS
> OAuth and Non-HNS SharedKey combinations. The value denoted by "actual value"
> below varies across failures, and exceeds the upper limit of the expected
> range.
> _TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
> The actual value 10 is not within the expected range: [5.60, 8.40]._
> Verified failure with client and server in the same region to rule out
> network issues.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17826) ABFS: Transient failure of TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting
Sumangala Patki created HADOOP-17826:

Summary: ABFS: Transient failure of TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting
Key: HADOOP-17826
URL: https://issues.apache.org/jira/browse/HADOOP-17826
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/azure
Affects Versions: 3.4.0
Reporter: Sumangala Patki

Transient failure of the below test observed for HNS OAuth, AppendBlob HNS OAuth and Non-HNS SharedKey combinations. The value denoted by "actual value" below varies across failures, and exceeds the upper limit of the expected range.

_TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49 The actual value 10 is not within the expected range: [5.60, 8.40]._
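For context, a fuzzy-validation check of this kind asserts that a measured value lies within a percentage band around an expected value; the reported range [5.60, 8.40] is consistent with a ±20% band around an expected value of 7. The sketch below reproduces that behavior (the name, signature, and tolerance are assumptions for illustration, not the actual Hadoop test helper):

```python
def fuzzy_validate(actual, expected, tolerance=0.2):
    """Pass if actual falls within expected +/- tolerance (as a fraction)."""
    lo = expected * (1 - tolerance)
    hi = expected * (1 + tolerance)
    if not (lo <= actual <= hi):
        raise AssertionError(
            f"The actual value {actual} is not within the expected range: "
            f"[{lo:.2f}, {hi:.2f}].")

fuzzy_validate(7, 7)   # within [5.60, 8.40]: passes
msg = ""
try:
    fuzzy_validate(10, 7)   # reproduces the transient-failure message shape
except AssertionError as e:
    msg = str(e)
assert "not within the expected range: [5.60, 8.40]" in msg
```

A check like this is inherently sensitive to timing jitter on shared CI hosts, which matches the "transient" behavior the issue describes.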
[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store
[ https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=632018&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632018 ]

ASF GitHub Bot logged work on HADOOP-17812:
---
Author: ASF GitHub Bot
Created on: 01/Aug/21 00:49
Start Date: 01/Aug/21 00:49
Worklog Time Spent: 10m

Work Description: wbo4958 commented on pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251#issuecomment-890424706

Hi @steveloughran, I just cherry-picked the PR to branch 3.3 as you required, and I also did the integration test and uploaded the test result in the JIRA. Please refer to https://issues.apache.org/jira/browse/HADOOP-17812?focusedCommentId=17391079&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17391079

Issue Time Tracking
---
Worklog Id: (was: 632018)
Time Spent: 4h 20m (was: 4h 10m)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Bobby Wang
> Assignee: Bobby Wang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 3.3-branch-failsafe-report.html.gz, failsafe-report.html.gz, s3a-test.tar.gz
>
> Time Spent: 4h 20m
> Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], an SSLException (which extends IOException) may happen, which triggers [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen": it first closes the original *wrappedStream* and sets *wrappedStream = null*, and then tries to [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code [obtaining the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] throws an exception, *wrappedStream* will be null.
> And the [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] mechanism may re-execute [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] and cause an NPE.
>
> For more details, please refer to [https://github.com/NVIDIA/spark-rapids/issues/2915]
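The failure sequence in the description — a read error triggers a reopen, the reopen itself throws leaving the wrapped stream null, and the retried read then dereferences it — can be modelled in a few lines. This is a simplified sketch, not the S3AInputStream code; Python's TypeError on `next(None)` plays the role of the Java NPE:

```python
# Simplified model of the HADOOP-17812 failure mode (illustrative only).

class FlakyStore:
    def __init__(self, reopen_fails=True):
        self.reopen_fails = reopen_fails

    def get_object(self):
        # Simulates fetching the object; the reconnect attempt can fail.
        if self.reopen_fails:
            raise IOError("failed to reconnect to store")
        return iter(b"data")

def retry_read(store):
    wrapped = iter(b"data")            # original stream, about to fail
    # onReadFailure: close the original stream...
    wrapped = None
    try:
        wrapped = store.get_object()   # ...and try to reopen it
    except IOError:
        pass                           # reopen failed; wrapped stays None
    # The retry mechanism re-executes the read on the (null) stream:
    try:
        return next(wrapped)
    except TypeError:
        return "NPE"                   # Java: NullPointerException

def guarded_retry_read(store):
    # Guarded variant: verify the stream was actually reopened first.
    wrapped = None
    try:
        wrapped = store.get_object()
    except IOError:
        pass
    if wrapped is None:
        raise IOError("stream is closed and could not be reopened")
    return next(wrapped)

assert retry_read(FlakyStore()) == "NPE"
```

The guarded variant shows one possible shape of a fix: surface an IOException when the stream could not be reopened, rather than retrying the read against a null stream.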
[jira] [Commented] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store
[ https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391079#comment-17391079 ]

Bobby Wang commented on HADOOP-17812:
-

Hi [~ste...@apache.org] I just cherry-picked the patch to branch-3.3 and re-ran the integration tests. One new test, *testUnbufferMultipleReads*, failed. The failure does not appear to be caused by my patch, since I can repro it even without my patch. I uploaded the result to the attachment; please refer to [^3.3-branch-failsafe-report.html.gz]

{code:java}
java.lang.AssertionError: failed to read expected number of bytes from stream. This may be transient expected:<128> but was:<93>
	at org.junit.Assert.fail(Assert.java:89)
	at org.junit.Assert.failNotEquals(Assert.java:835)
	at org.junit.Assert.assertEquals(Assert.java:647)
	at org.apache.hadoop.fs.contract.AbstractContractUnbufferTest.validateFileContents(AbstractContractUnbufferTest.java:139)
	at org.apache.hadoop.fs.contract.AbstractContractUnbufferTest.testUnbufferMultipleReads(AbstractContractUnbufferTest.java:111)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748){code}

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Bobby Wang
> Assignee: Bobby Wang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 3.3-branch-failsafe-report.html.gz, failsafe-report.html.gz, s3a-test.tar.gz
>
> Time Spent: 4h 10m
> Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], an SSLException (which extends IOException) may happen, which triggers [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen": it first closes the original *wrappedStream* and sets *wrappedStream = null*, and then tries to [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code [obtaining the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] throws an exception, *wrappedStream* will be null.
> And the [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] mechanism may re-execute [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] and cause an NPE.
>
> For more details, please refer to [https://github.com/NVIDIA/spark-rapids/issues/2915]
[jira] [Updated] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store
[ https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bobby Wang updated HADOOP-17812:
-

Attachment: 3.3-branch-failsafe-report.html.gz

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Bobby Wang
> Assignee: Bobby Wang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 3.3-branch-failsafe-report.html.gz, failsafe-report.html.gz, s3a-test.tar.gz
>
> Time Spent: 4h 10m
> Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], an SSLException (which extends IOException) may happen, which triggers [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen": it first closes the original *wrappedStream* and sets *wrappedStream = null*, and then tries to [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code [obtaining the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] throws an exception, *wrappedStream* will be null.
> And the [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] mechanism may re-execute [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] and cause an NPE.
>
> For more details, please refer to [https://github.com/NVIDIA/spark-rapids/issues/2915]
[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store
[ https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=632007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632007 ]

ASF GitHub Bot logged work on HADOOP-17812:
---
Author: ASF GitHub Bot
Created on: 31/Jul/21 23:01
Start Date: 31/Jul/21 23:01
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3251:
URL: https://github.com/apache/hadoop/pull/3251#issuecomment-890414029

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 10m 52s | | Docker mode activated. |
|| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|| _ branch-3.3 Compile Tests _ ||
| +1 :green_heart: | mvninstall | 34m 13s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 40s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 0m 28s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 47s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 0m 35s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 1m 16s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 18m 21s | | branch has no errors when building and testing our client artifacts. |
|| _ Patch Compile Tests _ ||
| +1 :green_heart: | mvninstall | 0m 40s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 21s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 38s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed |
| +1 :green_heart: | spotbugs | 1m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 14s | | patch has no errors when building and testing our client artifacts. |
|| _ Other Tests _ ||
| +1 :green_heart: | unit | 2m 22s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
| | | 92m 33s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3251/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3251 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 87ea4b4945ab 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / e13402611c5aab1ea2812c5002cb4b2661792fc8 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3251/1/testReport/ |
| Max. process+thread count | 521 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3251/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
---
Worklog Id: (was: 632007)
Time Spent: 4h 10m (was: 4h)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Bobby Wang
> Assignee: Bobby Wang
> Priority: Major
>
[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store
[ https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=632004=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632004 ] ASF GitHub Bot logged work on HADOOP-17812: --- Author: ASF GitHub Bot Created on: 31/Jul/21 21:27 Start Date: 31/Jul/21 21:27 Worklog Time Spent: 10m Work Description: wbo4958 opened a new pull request #3251: URL: https://github.com/apache/hadoop/pull/3251 This improves error handling after multiple failures reading data -when the read fails and attempts to reconnect() also fail. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 632004) Time Spent: 4h (was: 3h 50m) > NPE in S3AInputStream read() after failure to reconnect to store > > > Key: HADOOP-17812 > URL: https://issues.apache.org/jira/browse/HADOOP-17812 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.2, 3.3.1 >Reporter: Bobby Wang >Assignee: Bobby Wang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: failsafe-report.html.gz, s3a-test.tar.gz > > Time Spent: 4h > Remaining Estimate: 0h > > when [reading from S3a > storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], > SSLException (which extends IOException) happens, which will trigger > [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458]. > onReadFailure calls "reopen". 
it will first close the original > *wrappedStream* and set *wrappedStream = null*, and then it will try to > [re-get > *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184]. > But what if the previous code [obtaining > S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] > throw exception, then "wrappedStream" will be null. > And the > [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] > mechanism may re-execute the > [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] > and cause NPE. > > For more details, please refer to > [https://github.com/NVIDIA/spark-rapids/issues/2915] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
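The sequence described above (reopen() nulls *wrappedStream*, a later retry dereferences it) can be sketched in a few lines. This is a hypothetical, simplified model for illustration only: the class, field, and method names are invented and are not the actual S3AInputStream code, and the null check in the retry loop stands in for the kind of guard the fix adds.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simplified, hypothetical model of the failure mode: reopen() closes the
// wrapped stream, sets it to null, and may fail to obtain a replacement;
// an unguarded retry of read() would then hit a NullPointerException.
public class ReopenSketch {
    private InputStream wrappedStream = new ByteArrayInputStream(new byte[]{1, 2, 3});
    boolean storeUnreachable = true; // simulates the store being down

    void reopen() throws IOException {
        if (wrappedStream != null) {
            wrappedStream.close();
            wrappedStream = null;        // the stream is gone from here on
        }
        if (storeUnreachable) {          // stands in for getObject() throwing;
            throw new IOException("failed to reconnect to store");
        }                                // wrappedStream stays null in that case
        wrappedStream = new ByteArrayInputStream(new byte[]{1, 2, 3});
    }

    int readWithRetry(int attempts) throws IOException {
        for (int i = 0; i < attempts; i++) {
            try {
                if (wrappedStream == null) {
                    reopen();            // guard: reopen before dereferencing;
                }                        // without it, the read below would NPE
                return wrappedStream.read();
            } catch (IOException e) {
                // the retry mechanism swallows the IOException and loops again
            }
        }
        throw new IOException("out of retries");
    }

    public static void main(String[] args) throws IOException {
        ReopenSketch s = new ReopenSketch();
        try {
            s.reopen();                  // first reconnect fails: stream is now null
        } catch (IOException expected) {
        }
        s.storeUnreachable = false;      // store recovers
        System.out.println(s.readWithRetry(3)); // guarded retry succeeds, prints 1
    }
}
```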
[GitHub] [hadoop] xinglin closed pull request #3201: HDFS-16128: Added support for saving/loading an FS Image for fine-grain locking
xinglin closed pull request #3201: URL: https://github.com/apache/hadoop/pull/3201 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xinglin commented on pull request #3201: HDFS-16128: Added support for saving/loading an FS Image for fine-grain locking
xinglin commented on pull request #3201: URL: https://github.com/apache/hadoop/pull/3201#issuecomment-890403625 close it. Appreciated all the comments I received for this commit and Thanks @shvachko for committing it! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] shvachko commented on pull request #3201: HDFS-16128: Added support for saving/loading an FS Image for fine-grain locking
shvachko commented on pull request #3201: URL: https://github.com/apache/hadoop/pull/3201#issuecomment-890399263 I re-committed PR to reflect latest changes. We can close this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17825) Add BuiltInGzipCompressor
[ https://issues.apache.org/jira/browse/HADOOP-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391052#comment-17391052 ] L. C. Hsieh commented on HADOOP-17825: -- Thanks [~csun] . CI reported test failure, so I will fix the failure soon. I copied the CI command and ran it locally. Looks it can trigger the unit tests locally. It can speed up the debugging. I will create an umbrella Jira for these works. > Add BuiltInGzipCompressor > - > > Key: HADOOP-17825 > URL: https://issues.apache.org/jira/browse/HADOOP-17825 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, GzipCodec only supports BuiltInGzipDecompressor, if native zlib is > not loaded. So, without Hadoop native codec installed, saving SequenceFile > using GzipCodec will throw exception like "SequenceFile doesn't work with > GzipCodec without native-hadoop code!" > Same as other codecs which we migrated to using prepared packages (lz4, > snappy), it will be better if we support GzipCodec generally without Hadoop > native codec installed. Similar to BuiltInGzipDecompressor, we can use Java > Deflater to support BuiltInGzipCompressor. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
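As background for the Deflater approach the description mentions: the gzip container is just a fixed 10-byte header, a raw DEFLATE body, and a CRC32-plus-length trailer, so java.util.zip.Deflater in nowrap mode can produce it with no native zlib. The sketch below is illustrative only; it is not the BuiltInGzipCompressor API, and the class and method names are invented.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.Deflater;
import java.util.zip.GZIPInputStream;

// Hypothetical sketch: gzip framing built by hand around a nowrap Deflater.
public class GzipViaDeflater {
    public static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Fixed gzip header: magic 0x1f 0x8b, method 8 (deflate), no flags,
        // zero mtime, zero XFL/OS.
        out.write(new byte[]{0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0});

        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // nowrap: raw deflate
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);
            out.write(buf, 0, n);
        }
        deflater.end();

        CRC32 crc = new CRC32();
        crc.update(data);
        writeLE(out, (int) crc.getValue()); // CRC32 of the uncompressed data
        writeLE(out, data.length);          // uncompressed size mod 2^32
        return out.toByteArray();
    }

    private static void writeLE(ByteArrayOutputStream out, int v) {
        for (int i = 0; i < 4; i++) {
            out.write((v >>> (8 * i)) & 0xff); // gzip trailer is little-endian
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] gz = gzip("hello gzip".getBytes(StandardCharsets.UTF_8));
        // Round-trip through the JDK's GZIPInputStream to verify the framing.
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gz));
        byte[] back = new byte[64];
        int n = in.read(back);
        System.out.println(new String(back, 0, n, StandardCharsets.UTF_8)); // hello gzip
    }
}
```

The round-trip through GZIPInputStream is the same check a unit test for such a compressor would make.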
[jira] [Commented] (HADOOP-17825) Add BuiltInGzipCompressor
[ https://issues.apache.org/jira/browse/HADOOP-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391036#comment-17391036 ] Chao Sun commented on HADOOP-17825: --- Will take a look later. BTW shall we create an umbrella JIRA covering all the work (e.g., HADOOP-17125, HADOOP-17292, HADOOP-17464) of replacing native lib with their Java wrappers? > Add BuiltInGzipCompressor > - > > Key: HADOOP-17825 > URL: https://issues.apache.org/jira/browse/HADOOP-17825 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, GzipCodec only supports BuiltInGzipDecompressor, if native zlib is > not loaded. So, without Hadoop native codec installed, saving SequenceFile > using GzipCodec will throw exception like "SequenceFile doesn't work with > GzipCodec without native-hadoop code!" > Same as other codecs which we migrated to using prepared packages (lz4, > snappy), it will be better if we support GzipCodec generally without Hadoop > native codec installed. Similar to BuiltInGzipDecompressor, we can use Java > Deflater to support BuiltInGzipCompressor. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits
hadoop-yetus commented on pull request #3235: URL: https://github.com/apache/hadoop/pull/3235#issuecomment-890383695 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 6s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 26s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 26s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 54s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/19/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 140 unchanged - 0 fixed = 142 total (was 140) | | +1 :green_heart: | mvnsite | 1m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 387m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/19/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 500m 47s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/19/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3235 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux c8122b8278da 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8a62adfbaffa0c3d35b494ad0f0db90a1d7530d1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/19/testReport/ | | Max. process+thread count
[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits
hadoop-yetus commented on pull request #3235: URL: https://github.com/apache/hadoop/pull/3235#issuecomment-890382952 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 21s | | trunk passed | | +1 :green_heart: | compile | 1m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 22s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 29s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 29s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 55s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/20/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 140 unchanged - 0 fixed = 142 total (was 140) | | +1 :green_heart: | mvnsite | 1m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 381m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/20/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 490m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/20/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3235 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 212e3018a830 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8a62adfbaffa0c3d35b494ad0f0db90a1d7530d1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/20/testReport/ | | Max. process+thread count | 1976 (vs. ulimit of 5500) | | modules | C:
[GitHub] [hadoop] hadoop-yetus commented on pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits
hadoop-yetus commented on pull request #3235: URL: https://github.com/apache/hadoop/pull/3235#issuecomment-890382532 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 10s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 12s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 13s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/21/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 140 unchanged - 0 fixed = 142 total (was 140) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 402m 6s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/21/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 487m 16s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/21/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3235 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 58f5c2bc9461 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8a62adfbaffa0c3d35b494ad0f0db90a1d7530d1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results |
[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-12670: -- Fix Version/s: HADOOP-17800 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch HADOOP-17800. [~eclark] and [~hemanthboyina] thanks for your contribution. > Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only > - > > Key: HADOOP-12670 > URL: https://issues.apache.org/jira/browse/HADOOP-12670 > Project: Hadoop Common > Issue Type: Sub-task > Components: net >Affects Versions: HADOOP-11890 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Fix For: HADOOP-17800 > > Attachments: HADOOP-12670-HADOOP-11890.0.patch, > HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, > HADOOP-12670-HADOOP-17800.001.patch, HADOOP-12670-HADOOP-17800.002.patch > > > {code} > TestSecurityUtil.testBuildTokenServiceSockAddr:165 > expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123> > TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but > was:<[0:0:0:0:0:0:0:]1:123> > > TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > > TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > > TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but > was:<[127.0.0.]1> > TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For 
additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17390996#comment-17390996 ] Brahma Reddy Battula commented on HADOOP-12670: --- [~hemanthboyina] thanks for uploading the patch. Patch lgtm. > Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only > - > > Key: HADOOP-12670 > URL: https://issues.apache.org/jira/browse/HADOOP-12670 > Project: Hadoop Common > Issue Type: Sub-task > Components: net >Affects Versions: HADOOP-11890 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Attachments: HADOOP-12670-HADOOP-11890.0.patch, > HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, > HADOOP-12670-HADOOP-17800.001.patch, HADOOP-12670-HADOOP-17800.002.patch > > > {code} > TestSecurityUtil.testBuildTokenServiceSockAddr:165 > expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123> > TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but > was:<[0:0:0:0:0:0:0:]1:123> > > TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > > TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > > TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but > was:<[127.0.0.]1> > TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 > expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1> > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
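The assertion failures quoted above follow from tests hard-coding the IPv4 loopback literal: on a host where localhost resolves only over IPv6, the loopback address is ::1 (printed as 0:0:0:0:0:0:0:1), not 127.0.0.1. A hypothetical snippet, not taken from any of the attached patches, showing the address-family-neutral check:

```java
import java.net.InetAddress;

// Hypothetical illustration: the JDK returns the platform's loopback address,
// so asserting isLoopbackAddress() holds on both IPv4 and IPv6-only hosts,
// while comparing against the literal "127.0.0.1" does not.
public class LoopbackCheck {
    public static void main(String[] args) {
        InetAddress lb = InetAddress.getLoopbackAddress();
        System.out.println(lb.getHostAddress());    // 127.0.0.1 or ::1, host-dependent
        System.out.println(lb.isLoopbackAddress()); // true either way
    }
}
```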
[GitHub] [hadoop] szilard-nemeth commented on pull request #3209: HDFS-16129. Fixing the signature secret file misusage in HttpFS.
szilard-nemeth commented on pull request #3209: URL: https://github.com/apache/hadoop/pull/3209#issuecomment-890356871 @tomicooler The latest build looks way better than before. Could you please check if any of the UT failures are related to your patch? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work logged] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?focusedWorklogId=631968&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631968 ] ASF GitHub Bot logged work on HADOOP-17612: --- Author: ASF GitHub Bot Created on: 31/Jul/21 13:37 Start Date: 31/Jul/21 13:37 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3241: URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890350037
[GitHub] [hadoop] hadoop-yetus commented on pull request #3241: HADOOP-17612. Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
hadoop-yetus commented on pull request #3241: URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890350037 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 14s | | trunk passed | | +1 :green_heart: | compile | 23m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 21m 8s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 50s | | trunk passed | | +1 :green_heart: | mvnsite | 26m 32s | | trunk passed | | +1 :green_heart: | javadoc | 8m 10s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 8m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 20s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 49m 15s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 37s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 27m 21s | | the patch passed | | +1 :green_heart: | compile | 23m 3s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 23m 3s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/9/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 10 new + 1917 unchanged - 0 fixed = 1927 total (was 1917) | | +1 :green_heart: | compile | 20m 52s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 20m 52s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/9/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 10 new + 1793 unchanged - 0 fixed = 1803 total (was 1793) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 46s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 356 unchanged - 2 fixed = 357 total (was 358) | | +1 :green_heart: | mvnsite | 22m 32s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 13s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 8m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 8m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 22s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 49m 57s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 824m 27s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/9/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 40s | | The patch does not generate ASF License warnings. | | | | 1201m 54s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.tools.fedbalance.TestDistCpProcedure | | | hadoop.tools.dynamometer.TestDynamometerInfra | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-|
[jira] [Work logged] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?focusedWorklogId=631967&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631967 ] ASF GitHub Bot logged work on HADOOP-17612: --- Author: ASF GitHub Bot Created on: 31/Jul/21 13:33 Start Date: 31/Jul/21 13:33 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3241: URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890349606
[GitHub] [hadoop] hadoop-yetus commented on pull request #3241: HADOOP-17612. Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
hadoop-yetus commented on pull request #3241: URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890349606 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 24s | | trunk passed | | +1 :green_heart: | compile | 23m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 21m 0s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 26m 27s | | trunk passed | | +1 :green_heart: | javadoc | 8m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 8m 8s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 19s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 49m 15s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 37s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 27m 15s | | the patch passed | | +1 :green_heart: | compile | 23m 6s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 23m 6s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/8/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 10 new + 1921 unchanged - 0 fixed = 1931 total (was 1921) | | +1 :green_heart: | compile | 20m 55s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 20m 55s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/8/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 10 new + 1797 unchanged - 0 fixed = 1807 total (was 1797) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 44s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/8/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 356 unchanged - 2 fixed = 357 total (was 358) | | +1 :green_heart: | mvnsite | 22m 40s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 13s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 8m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 8m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 22s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 49m 34s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 821m 28s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/8/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 41s | | The patch does not generate ASF License warnings. | | | | 1198m 14s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.tools.dynamometer.TestDynamometerInfra | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r680347541

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java ##

```java
@@ -52,4 +63,131 @@ public static GetClusterMetricsResponse merge(
    }
    return GetClusterMetricsResponse.newInstance(tmp);
  }

  /**
   * Merges a list of ApplicationReports grouping by ApplicationId.
   * Our current policy is to merge the application reports from the reachable
   * SubClusters.
   * @param responses a list of ApplicationResponse to merge
   * @param returnPartialResult if the merge ApplicationReports should contain
   * partial result or not
   * @return the merged ApplicationsResponse
   */
  public static GetApplicationsResponse mergeApplications(
      Collection<GetApplicationsResponse> responses,
      boolean returnPartialResult) {
    Map<ApplicationId, ApplicationReport> federationAM = new HashMap<>();
    Map<ApplicationId, ApplicationReport> federationUAMSum = new HashMap<>();

    for (GetApplicationsResponse appResponse : responses) {
      for (ApplicationReport appReport : appResponse.getApplicationList()) {
        ApplicationId appId = appReport.getApplicationId();
        // Check if this ApplicationReport is an AM
        if (appReport.getHost() != null) {
          // Insert in the list of AM
          federationAM.put(appId, appReport);
          // Check if there are any UAM found before
          if (federationUAMSum.containsKey(appId)) {
            // Merge the current AM with the found UAM
            mergeAMWithUAM(appReport, federationUAMSum.get(appId));
            // Remove the sum of the UAMs
            federationUAMSum.remove(appId);
          }
        // This ApplicationReport is an UAM
        } else if (federationAM.containsKey(appId)) {
          // Merge the current UAM with its own AM
          mergeAMWithUAM(federationAM.get(appId), appReport);
        } else if (federationUAMSum.containsKey(appId)) {
          // Merge the current UAM with its own UAM and update the list of UAM
          ApplicationReport mergedUAMReport =
              mergeUAMWithUAM(federationUAMSum.get(appId), appReport);
          federationUAMSum.put(appId, mergedUAMReport);
        } else {
          // Insert in the list of UAM
          federationUAMSum.put(appId, appReport);
        }
      }
    }
    // Check the remaining UAMs are depending or not from federation
    for (ApplicationReport appReport : federationUAMSum.values()) {
      if (mergeUamToReport(appReport.getName(), returnPartialResult)) {
        federationAM.put(appReport.getApplicationId(), appReport);
      }
    }

    return GetApplicationsResponse.newInstance(federationAM.values());
  }

  private static ApplicationReport mergeUAMWithUAM(ApplicationReport uam1,
      ApplicationReport uam2) {
    uam1.setName(PARTIAL_REPORT + uam1.getApplicationId());
    mergeAMWithUAM(uam1, uam2);
    return uam1;
  }

  private static void mergeAMWithUAM(ApplicationReport am,
      ApplicationReport uam) {
    ApplicationResourceUsageReport amResourceReport =
        am.getApplicationResourceUsageReport();
    ApplicationResourceUsageReport uamResourceReport =
        uam.getApplicationResourceUsageReport();

    amResourceReport.setNumUsedContainers(
        amResourceReport.getNumUsedContainers()
        + uamResourceReport.getNumUsedContainers());
    amResourceReport.setNumReservedContainers(
        amResourceReport.getNumReservedContainers()
        + uamResourceReport.getNumReservedContainers());
    amResourceReport.setUsedResources(Resources.add(
        amResourceReport.getUsedResources(),
        uamResourceReport.getUsedResources()));
    amResourceReport.setReservedResources(Resources.add(
        amResourceReport.getReservedResources(),
        uamResourceReport.getReservedResources()));
    amResourceReport.setNeededResources(Resources.add(
        amResourceReport.getNeededResources(),
        uamResourceReport.getNeededResources()));
    amResourceReport.setMemorySeconds(
        amResourceReport.getMemorySeconds()
        + uamResourceReport.getMemorySeconds());
    amResourceReport.setVcoreSeconds(
        amResourceReport.getVcoreSeconds()
        + uamResourceReport.getVcoreSeconds());
    amResourceReport.setQueueUsagePercentage(
        amResourceReport.getQueueUsagePercentage()
        + uamResourceReport.getQueueUsagePercentage());
    amResourceReport.setClusterUsagePercentage(
        amResourceReport.getClusterUsagePercentage()
        + uamResourceReport.getClusterUsagePercentage());

    am.setApplicationResourceUsageReport(amResourceReport);
    am.getApplicationTags().addAll(uam.getApplicationTags());
```

Review comment: @bibinchundatt: This was taken care in previous commit already. The comment seems
[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor
[ https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=631959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631959 ] ASF GitHub Bot logged work on HADOOP-17825: --- Author: ASF GitHub Bot Created on: 31/Jul/21 11:23 Start Date: 31/Jul/21 11:23 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3250: URL: https://github.com/apache/hadoop/pull/3250#issuecomment-89085 Issue Time Tracking --- Worklog Id: (was: 631959) Time Spent: 0.5h (was: 20m) > Add BuiltInGzipCompressor > ----------------- > > Key: HADOOP-17825 > URL: https://issues.apache.org/jira/browse/HADOOP-17825 > Project: Hadoop Common > Issue Type: Improvement > Reporter: L. C. Hsieh > Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, GzipCodec only supports BuiltInGzipDecompressor if native zlib is > not loaded. So, without the Hadoop native codec installed, saving a SequenceFile > using GzipCodec throws an exception like "SequenceFile doesn't work with > GzipCodec without native-hadoop code!" > As with the other codecs we migrated to prepared packages (lz4, > snappy), it would be better if we supported GzipCodec generally without the Hadoop > native codec installed. Similar to BuiltInGzipDecompressor, we can use the Java > Deflater to support BuiltInGzipCompressor.
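The approach the ticket describes — gzip framing around a raw `java.util.zip.Deflater` so no native zlib is needed — can be illustrated standalone. This is a minimal sketch under that assumption, not the actual Hadoop BuiltInGzipCompressor; the class and method names here are ours:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.Deflater;
import java.util.zip.GZIPInputStream;

public class GzipViaDeflater {
  // Produce a gzip member with only java.util.zip.Deflater: 10-byte header,
  // raw deflate body (nowrap=true), then CRC32 + uncompressed size trailer.
  public static byte[] gzip(byte[] input) {
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true); // raw deflate, no zlib wrapper
    deflater.setInput(input);
    deflater.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // Header: magic 0x1f 0x8b, CM=8 (deflate), no flags, zero MTIME, XFL=0, OS=255 (unknown)
    out.write(new byte[]{(byte) 0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, (byte) 0xff}, 0, 10);
    byte[] buf = new byte[8192];
    while (!deflater.finished()) {
      int n = deflater.deflate(buf);
      out.write(buf, 0, n);
    }
    deflater.end();
    // Trailer: CRC32 of the plaintext, then its length, both little-endian
    CRC32 crc = new CRC32();
    crc.update(input, 0, input.length);
    writeLE(out, (int) crc.getValue());
    writeLE(out, input.length);
    return out.toByteArray();
  }

  private static void writeLE(ByteArrayOutputStream out, int v) {
    out.write(v & 0xff);
    out.write((v >>> 8) & 0xff);
    out.write((v >>> 16) & 0xff);
    out.write((v >>> 24) & 0xff);
  }

  public static void main(String[] args) throws Exception {
    byte[] data = "hello gzip without native zlib".getBytes(StandardCharsets.UTF_8);
    byte[] gz = gzip(data);
    // Round-trip through the JDK's GZIPInputStream to verify the framing
    GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gz));
    byte[] back = in.readAllBytes();
    System.out.println(new String(back, StandardCharsets.UTF_8).equals("hello gzip without native zlib"));
  }
}
```

The round-trip through `GZIPInputStream` (which verifies the CRC32 and ISIZE trailer) is what makes this the same shape of solution as the BuiltInGzipDecompressor counterpart mentioned in the ticket.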
[GitHub] [hadoop] virajjasani commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits
virajjasani commented on a change in pull request #3235: URL: https://github.com/apache/hadoop/pull/3235#discussion_r680314511

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java ##

```java
@@ -423,21 +423,22 @@ void triggerActiveLogRoll() {
     try {
       future = rollEditsRpcExecutor.submit(getNameNodeProxy());
       future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS);
-      lastRollTimeMs = monotonicNow();
+      resetLastRollTimeMs();
       lastRollTriggerTxId = lastLoadedTxnId;
-    } catch (ExecutionException e) {
+    } catch (ExecutionException | InterruptedException e) {
       LOG.warn("Unable to trigger a roll of the active NN", e);
     } catch (TimeoutException e) {
-      if (future != null) {
-        future.cancel(true);
-      }
+      future.cancel(true);
```

Review comment: Because future will never be null here. The only way we can reach here is by catching `TimeoutException`, and `TimeoutException` can only occur here because of `future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS)`; hence we don't need a null check.
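The reviewer's point — a `TimeoutException` from `future.get(...)` implies the future was already assigned, so the null check is redundant — can be demonstrated in isolation. The class name and timings below are illustrative, not from the patch:

```java
import java.util.concurrent.*;

public class TimeoutCancelDemo {
  public static void main(String[] args) {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<String> future = null;
    try {
      // Task takes ~1s; we only wait 10ms, forcing a TimeoutException.
      future = pool.submit(() -> { Thread.sleep(1000); return "rolled"; });
      future.get(10, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Reachable only after future.get(...) was invoked on an assigned
      // future, so future cannot be null here -- the reviewer's argument.
      future.cancel(true);
      System.out.println("cancelled=" + future.isCancelled());
    } catch (ExecutionException | InterruptedException e) {
      Thread.currentThread().interrupt();
    } finally {
      pool.shutdownNow();
    }
  }
}
```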
## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java ##

```java
@@ -452,26 +472,20 @@ public void testStandbyTriggersLogRollsWhenTailInProgressEdits()
   private static void waitForStandbyToCatchUpWithInProgressEdits(
       final NameNode standby, final long activeTxId, int maxWaitSec)
       throws Exception {
-    GenericTestUtils.waitFor(new Supplier<Boolean>() {
-      @Override
-      public Boolean get() {
-        long standbyTxId = standby.getNamesystem().getFSImage()
-            .getLastAppliedTxId();
-        return (standbyTxId >= activeTxId);
-      }
-    }, 100, maxWaitSec * 1000);
+    GenericTestUtils.waitFor(() -> {
+      long standbyTxId = standby.getNamesystem().getFSImage()
+          .getLastAppliedTxId();
+      return (standbyTxId >= activeTxId);
+    }, 100, TimeUnit.SECONDS.toMillis(maxWaitSec));
   }

   private static void checkForLogRoll(final NameNode active,
       final long origTxId, int maxWaitSec) throws Exception {
-    GenericTestUtils.waitFor(new Supplier<Boolean>() {
-      @Override
-      public Boolean get() {
-        long curSegmentTxId = active.getNamesystem().getFSImage().getEditLog()
-            .getCurSegmentTxId();
-        return (origTxId != curSegmentTxId);
-      }
-    }, 100, maxWaitSec * 1000);
+    GenericTestUtils.waitFor(() -> {
+      long curSegmentTxId = active.getNamesystem().getFSImage().getEditLog()
+          .getCurSegmentTxId();
+      return (origTxId != curSegmentTxId);
+    }, 500, TimeUnit.SECONDS.toMillis(maxWaitSec));
```

Review comment: I think checking the above condition every 100ms is too aggressive; keeping it at 500ms is less aggressive and quite enough for both of our tests: a) timeout during verification, b) successful verification of Standby NN's txnId. However, now that we are going to add a Timer implementation, it's better to keep it as is. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
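For readers unfamiliar with `GenericTestUtils.waitFor`, the polling-interval trade-off discussed above can be sketched with a minimal stand-in. This is not Hadoop's implementation; the class, method, and timings are ours:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitFor {
  // waitFor-style helper: poll a condition every checkEveryMillis until it
  // holds or waitForMillis elapses. A larger interval (e.g. 500ms vs 100ms)
  // polls less aggressively at the cost of coarser detection latency.
  static void waitFor(BooleanSupplier check, long checkEveryMillis,
      long waitForMillis) throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + waitForMillis;
    while (!check.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("condition not met within " + waitForMillis + " ms");
      }
      Thread.sleep(checkEveryMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Condition becomes true after ~200 ms; polled every 50 ms.
    waitFor(() -> System.currentTimeMillis() - start >= 200, 50, 5000);
    System.out.println("condition met");
  }
}
```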
[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor
[ https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=631950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631950 ] ASF GitHub Bot logged work on HADOOP-17825: --- Author: ASF GitHub Bot Created on: 31/Jul/21 07:50 Start Date: 31/Jul/21 07:50 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3250: URL: https://github.com/apache/hadoop/pull/3250#issuecomment-890307657 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 6s | | trunk passed | | +1 :green_heart: | compile | 21m 9s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 9s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 2s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 43s | | the patch passed | | +1 :green_heart: | compile | 19m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 145 new + 332 unchanged - 0 fixed = 477 total (was 332) | | +1 :green_heart: | mvnsite | 1m 32s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 2m 38s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | shadedclient | 20m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 26m 10s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. 
| | | | 192m 48s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Dead store to beforeDeflate in org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor.compress(byte[], int, int) At BuiltInGzipCompressor.java:org.apache.hadoop.io.compress.zlib.BuiltInGzipCompressor.compress(byte[], int, int) At BuiltInGzipCompressor.java:[line 128] | | | Return value of java.util.zip.Deflater.finished() ignored, but method has no side effect At BuiltInGzipCompressor.java:but method has no side effect At BuiltInGzipCompressor.java:[line 166] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3250/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3250 | |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 1e3f07a4c10c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 28dd31070f428f15b29a1f1addc4903771b4000c |
| Default Java | Private
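The second SpotBugs warning above flags a call to `java.util.zip.Deflater.finished()` whose boolean result is discarded: the method has no side effect, so ignoring its return value makes the call a no-op. A minimal sketch of the intended pattern (this is illustrative code, not the actual `BuiltInGzipCompressor` implementation; the class and method names here are made up for the example) is to let `finished()` drive the drain loop:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterFinishedDemo {
    /**
     * Compress a buffer with java.util.zip.Deflater.
     * Deflater.finished() only reports state; its return value must be
     * consumed (here it terminates the drain loop). Calling it and
     * discarding the result, as SpotBugs flags, does nothing.
     */
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish(); // signal that no more input will arrive
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[512];
        while (!deflater.finished()) { // use the boolean, don't discard it
            int n = deflater.deflate(buf);
            out.write(buf, 0, n);
        }
        deflater.end(); // release native zlib resources
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "hello hello hello hello".getBytes();
        byte[] compressed = compress(data);
        System.out.println("compressed " + data.length + " -> "
                + compressed.length + " bytes");
    }
}
```

The related "dead store" warning is the inverse problem: a value such as a `beforeDeflate` byte count is computed and assigned but never read afterwards, so the assignment can be removed or the value actually used.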
[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390915#comment-17390915 ]

Hadoop QA commented on HADOOP-12670:

(/) +1 overall

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 28s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|| || || || HADOOP-17800 Compile Tests || ||
| +1 | mvninstall | 24m 9s | | HADOOP-17800 passed |
| +1 | compile | 25m 59s | | HADOOP-17800 passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | compile | 21m 31s | | HADOOP-17800 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | checkstyle | 1m 1s | | HADOOP-17800 passed |
| +1 | mvnsite | 1m 37s | | HADOOP-17800 passed |
| +1 | shadedclient | 18m 39s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 6s | | HADOOP-17800 passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javadoc | 1m 36s | | HADOOP-17800 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| 0 | spotbugs | 24m 6s | | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 2m 45s | | HADOOP-17800 passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 1m 4s | | the patch passed |
| +1 | compile | 25m 2s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javac | 25m 2s | | the patch passed |
| +1 | compile | 21m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | javac | 21m 32s | | the patch passed |
| +1 | checkstyle | 1m 1s | | hadoop-common-project/hadoop-common: The patch generated 0 new + 85 unchanged - 2 fixed = 85 total (was 87) |
| +1 | mvnsite | 1m 37s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 17m 32s | | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 4s | | the patch passed with JDK