[GitHub] [hadoop] mukund-thakur commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
mukund-thakur commented on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-593810970 Thanks @steveloughran. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049957#comment-17049957 ] Fengnan Li edited comment on HADOOP-16828 at 3/3/20 7:00 AM: - [~xyao] Thanks very much for the review! Really appreciate it. In fact, after the initial patch I found a bug in using delTokenSeqCounter.getCount(), since it maintains a SharedCount in ZK that can be changed by other secret managers, so I replaced it with a constant value. I also changed the logic a little so that managers compete for the start of the range and then count up, rather than competing for the upper limit and deriving the range start, since the former is more intuitive to understand. Uploaded [^HADOOP-16828.002.patch] addressing your comments as well. The holes are expected as a tradeoff of this strategy; many account registration services adopt it for much faster ID generation.
> Zookeeper Delegation Token Manager fetch sequence number by batch > - > > Key: HADOOP-16828 > URL: https://issues.apache.org/jira/browse/HADOOP-16828 > Project: Hadoop Common > Issue Type: Improvement > Reporter: Fengnan Li > Assignee: Fengnan Li > Priority: Major > Attachments: HADOOP-16828.001.patch, HADOOP-16828.002.patch, Screen Shot 2020-01-25 at 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, Screen Shot 2020-01-25 at 2.25.24 PM.png > > > Currently in ZKDelegationTokenSecretManager.java the sequence number is incremented by 1 each time a new token is requested, and each increment sends traffic to the Zookeeper server. With multiple managers running, this causes data contention. Since the increment uses tryAndSet, which is optimistic concurrency control without locking, the contention degrades performance when the secret managers are under heavy traffic. > The change here is to fetch the sequence number in batches instead of one at a time, which reduces the traffic sent to ZK and keeps many operations inside the ZK secret manager's memory. > After putting this into production we saw a huge improvement in the RPC processing latency of getDelegationToken calls. Because ZK takes less traffic this way, other write calls, such as renew and cancel delegation token, also benefit from this change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
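The batched range-allocation strategy described above can be sketched as follows. This is a minimal illustration, not the actual ZKDelegationTokenSecretManager code: the AtomicInteger stands in for the ZooKeeper SharedCount, its compareAndSet stands in for the tryAndSet call against ZK, and the class name and BATCH_SIZE value are hypothetical.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of fetching delegation-token sequence numbers by batch: each
// manager competes for the *start* of a range via compare-and-set, then
// counts up locally without touching the shared counter again until the
// range is exhausted.
public class BatchSeqAllocator {
    private static final int BATCH_SIZE = 1000; // illustrative batch size

    private final AtomicInteger sharedCounter; // stand-in for the ZK SharedCount
    private int next;   // next id to hand out, local to this manager
    private int limit;  // exclusive upper bound of the current batch

    public BatchSeqAllocator(AtomicInteger sharedCounter) {
        this.sharedCounter = sharedCounter;
    }

    public synchronized int nextSequenceNumber() {
        if (next >= limit) {
            // Claim a fresh range; retry on contention (optimistic CAS).
            while (true) {
                int start = sharedCounter.get();
                if (sharedCounter.compareAndSet(start, start + BATCH_SIZE)) {
                    next = start;
                    limit = start + BATCH_SIZE;
                    break;
                }
            }
        }
        return next++;
    }
}
```

Two allocators sharing one counter hand out disjoint ranges (0..999 and 1000..1999, say), which is also why "holes" appear in the sequence if a manager restarts mid-batch.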
[jira] [Updated] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-16828: Attachment: HADOOP-16828.002.patch
[jira] [Commented] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049937#comment-17049937 ] Bilahari T H commented on HADOOP-16890: --- Driver test results using a Namespace enabled account in Central India:

With client-credentials

Account without namespace support
{noformat}
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance:440->Assert.assertTrue:41->Assert.fail:88 Performance of version 2 is not acceptable: v1ElapsedMs=1389, v2ElapsedMs=2214, ratio=1.59
[INFO]
[ERROR] Tests run: 415, Failures: 1, Errors: 0, Skipped: 227
[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24
{noformat}
Account with namespace support
{noformat}
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 415, Failures: 0, Errors: 0, Skipped: 37
[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24
{noformat}
With MSI

Account without namespace support
{noformat}
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance:440->Assert.assertTrue:41->Assert.fail:88 Performance of version 2 is not acceptable: v1ElapsedMs=1689, v2ElapsedMs=2456, ratio=1.45
[INFO]
[ERROR] Tests run: 415, Failures: 1, Errors: 0, Skipped: 226
[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24
{noformat}
Account with namespace support
{noformat}
[INFO] Tests run: 53, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 415, Failures: 0, Errors: 0, Skipped: 36
[WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24
{noformat}
> ABFS: Change in expiry calculation for MSI token provider > - > > Key: HADOOP-16890 > URL: https://issues.apache.org/jira/browse/HADOOP-16890 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure > Reporter: Bilahari T H > Assignee: Bilahari T H > Priority: Minor > > Set token expiry time
as the value of expires_on field from the MSI response > in case it is present
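The expiry calculation described above can be sketched as follows. This is a hedged illustration, not the actual hadoop-azure token-provider code: the class and method names are hypothetical, and it only shows the arithmetic of preferring the absolute expires_on field (epoch seconds) over a relative expires_in lifetime.

```java
import java.util.Date;

// Sketch: prefer the absolute expires_on timestamp from the MSI token
// response when it is present; otherwise fall back to now + expires_in.
public class MsiTokenExpiry {
    public static Date computeExpiry(String expiresOnSeconds,
                                     long expiresInSeconds,
                                     long nowMillis) {
        if (expiresOnSeconds != null && !expiresOnSeconds.isEmpty()) {
            // expires_on is an absolute epoch timestamp in seconds.
            return new Date(Long.parseLong(expiresOnSeconds) * 1000L);
        }
        // Fall back to a relative lifetime measured from "now".
        return new Date(nowMillis + expiresInSeconds * 1000L);
    }
}
```

Using the absolute timestamp avoids clock-skew between token issuance and the moment the client computes "now + expires_in".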
[jira] [Commented] (HADOOP-16840) AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket
[ https://issues.apache.org/jira/browse/HADOOP-16840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049907#comment-17049907 ] Weiwei Yang commented on HADOOP-16840: -- Hi [~wujinhu] Sorry for the late response... I was quite busy with some other work. For the patch here, I don't know the internals of OSS, but the code you added might run into an endless loop... I think you at least need a timeout to break the loop. What do you think? > AliyunOSS: getFileStatus throws FileNotFoundException in versioning bucket > -- > > Key: HADOOP-16840 > URL: https://issues.apache.org/jira/browse/HADOOP-16840 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss > Affects Versions: 2.10.0, 2.9.2, 3.0.3, 3.2.1, 3.1.3 > Reporter: wujinhu > Assignee: wujinhu > Priority: Major > Attachments: HADOOP-16840.001.patch, HADOOP-16840.002.patch > > > When hadoop lists objects in a versioning bucket with many delete markers in it, OSS will return {code:java} > > > select-us-east-1 > test/hadoop/file/ > > 100 > / > true > test/hadoop/file/sub2 > > {code} > It sets *IsTruncated* to true without *ObjectSummaries* or *CommonPrefixes*, and getFileStatus will throw FileNotFoundException {code:java} > // code placeholder > java.io.FileNotFoundException: oss://select-us-east-1/test/hadoop/file: No > such file or directory!
> at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.getFileStatus(AliyunOSSFileSystem.java:281) > at > org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.testGetFileStatusInVersioningBucket{code} > > {code:java} > // code placeholder > ObjectListing listing = store.listObjects(key, 1, null, false); > if (CollectionUtils.isNotEmpty(listing.getObjectSummaries()) || > CollectionUtils.isNotEmpty(listing.getCommonPrefixes())) { > return new OSSFileStatus(0, true, 1, 0, 0, qualifiedPath, username); > } else { > throw new FileNotFoundException(path + ": No such file or directory!"); > } > {code} > In this case, we should call listObjects until *IsTruncated* is false, or *ObjectSummaries* or *CommonPrefixes* is non-empty.
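The bounded retry the reviewer asks for can be sketched as follows. This is an illustration only, not the OSS SDK or the actual patch: the Listing class and Lister interface stand in for the SDK's ObjectListing and listObjects call, MAX_ROUNDS is a hypothetical bound, and the real getFileStatus throws FileNotFoundException where this sketch simply returns false.

```java
import java.util.List;

// Sketch: keep paging through the listing while it is truncated but has
// produced no summaries or prefixes (i.e. only delete markers so far),
// and bail out after a fixed number of rounds instead of looping forever.
public class VersionedBucketList {
    public static class Listing {
        final boolean truncated;
        final List<String> objectSummaries;
        final List<String> commonPrefixes;
        final String nextMarker;
        public Listing(boolean truncated, List<String> objectSummaries,
                       List<String> commonPrefixes, String nextMarker) {
            this.truncated = truncated;
            this.objectSummaries = objectSummaries;
            this.commonPrefixes = commonPrefixes;
            this.nextMarker = nextMarker;
        }
    }

    public interface Lister {
        Listing list(String marker); // stand-in for store.listObjects(...)
    }

    static final int MAX_ROUNDS = 10; // hypothetical bound to avoid an endless loop

    public static boolean prefixHasChildren(Lister store, String path) {
        String marker = null;
        for (int round = 0; round < MAX_ROUNDS; round++) {
            Listing l = store.list(marker);
            if (!l.objectSummaries.isEmpty() || !l.commonPrefixes.isEmpty()) {
                return true; // a real child exists: path is a directory
            }
            if (!l.truncated) {
                return false; // nothing left: genuinely absent
            }
            marker = l.nextMarker; // only delete markers so far; next page
        }
        return false; // bound hit: give up rather than loop forever
    }
}
```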
[GitHub] [hadoop] invincible-akshay commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity.
invincible-akshay commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity. URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593759072 Should we also update the following: > With this policy, the replicas of a file do not evenly distribute across the racks. - file -> block And for the previous discussion I'm considering the statement as follows: > Two replicas are on different nodes of one rack and the remaining replica is on a node of one of the other racks.
[GitHub] [hadoop] aajisaka commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity.
aajisaka commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity. URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593753554 Thanks. > Or is it appropriate to update the code in same branch and let the PR get updated automatically? This is my first time so not very sure about the conventions. You can add commits in the same branch and let the PR get updated automatically :)
[jira] [Commented] (HADOOP-16899) Update HdfsDesign.md to reduce ambiguity
[ https://issues.apache.org/jira/browse/HADOOP-16899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049868#comment-17049868 ] Akshay Nehe commented on HADOOP-16899: -- Based on the comment by [~aajisaka] on the PR, considering replacing the line with: _Two replicas are on one rack, and the remaining replica is on one of the other racks._ instead of the previously indicated update. An additional recommended update will be included for the sentence two sentences before the one mentioned above. Quoting Akira: The following sentence is not directly related to your PR, however, it can be fixed at the same time. {quote}However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. {quote} * it does reduce -> it does not reduce If a block is placed in three unique racks, the probability of rack-local reads will increase and the network bandwidth will be reduced when reading the data. Therefore I think 'does' should be changed to 'does not'. Once both changes are decided, I will update the Jira description. > Update HdfsDesign.md to reduce ambiguity > > > Key: HADOOP-16899 > URL: https://issues.apache.org/jira/browse/HADOOP-16899 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation > Reporter: Akshay Nehe > Priority: Minor > > A proposed update to > [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md|http://example.com/] > in the section "Replica Placement: The First Baby Steps", 4th paragraph, 2nd last line. > The sentence leads to ambiguity for the reader.
> Considering the statement segmented into 3 parts by the commas: > # the first part talks about "one third of replicas"; > # the second part talks about "two thirds of replicas"; > # the third part, talking about "the other third", leads to ambiguity because one third and two thirds have already accounted for the whole. > Proposed solution: > Get rid of the third part or rephrase the entire sentence to capture its overall essence. > In other words, replacing > _One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks._ > with > _One third of replicas are on one node, two thirds of replicas are on one rack._ > Please suggest if any additional meaning is getting lost with this replacement.
[GitHub] [hadoop] invincible-akshay commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity.
invincible-akshay commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity. URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593747194 Hi @aajisaka , thank you for your feedback. I agree with you; the talk of fractions made me think about multiple blocks of a file. I will replace the sentence with the one you suggested, which will make it clear. I am happy to include the 2nd recommended update as well. I will update and raise the PR again. Or is it appropriate to update the code in the same branch and let the PR get updated automatically? This is my first time so I am not very sure about the conventions.
[GitHub] [hadoop] aajisaka commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity.
aajisaka commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity. URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593742741 Thank you for your contribution. The sentence seems still ambiguous to me > One third of replicas are on one node, two thirds of replicas are on one rack. -> Two replicas are on one rack, and the remaining replica is on one of the other racks. - The replication factor is 3 in this sentence, so 'one' seems clearer than 'one third'. - This sentence should tell that a replica is on a rack (instead of node) and the other two replicas are on one of 'the other' racks. The following sentence is not directly related to your PR, however, it can be fixed at the same time. > However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. * it does reduce -> it does not reduce If a block is placed in three unique racks, the probability of rack-local read will increase and the network bandwidth will be reduced when reading the data. Therefore I think 'does' should be changed to 'does not'.
[GitHub] [hadoop] mukul1987 commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
mukul1987 commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#discussion_r386768106

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java

## @@ -541,8 +541,8 @@ public void clearSnapshottableDirs() {
   *
   * @return maximum allowable snapshot ID.
   */
- public int getMaxSnapshotID() {
-   return ((1 << SNAPSHOT_ID_BIT_WIDTH) - 1);
+ public int getMaxSnapshotID() {
+   return (1 << SNAPSHOT_ID_BIT_WIDTH);

Review comment: I think we can still leave this as max index -1, as we would only have one snapshot id here.
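The arithmetic the diff and review comment turn on can be checked with a small sketch. This is illustrative only: the class, method names, and the width value used in the test are local to this example, not the actual SnapshotManager code.

```java
// With a bit width W, ((1 << W) - 1) is the largest value representable in
// W bits (the "max index"), while (1 << W) is one past it. The one-unit
// difference between the two expressions is exactly what the diff above
// changes.
public class SnapshotIdWidth {
    public static int maxIndex(int bitWidth) {
        return (1 << bitWidth) - 1; // largest id that fits in bitWidth bits
    }

    public static int rolloverPoint(int bitWidth) {
        return 1 << bitWidth; // first id that no longer fits in bitWidth bits
    }
}
```

For example, with a 4-bit width the max index is 15 and the rollover point is 16, so the two formulas differ by exactly one usable snapshot id.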
[jira] [Commented] (HADOOP-16898) Batch listing of multiple directories to be an unstable interface
[ https://issues.apache.org/jira/browse/HADOOP-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049829#comment-17049829 ] Wei-Chiu Chuang commented on HADOOP-16898: -- [~csun] fyi > Batch listing of multiple directories to be an unstable interface > - > > Key: HADOOP-16898 > URL: https://issues.apache.org/jira/browse/HADOOP-16898 > Project: Hadoop Common > Issue Type: Improvement > Components: fs > Affects Versions: 3.3.0 > Reporter: Steve Loughran > Assignee: Steve Loughran > Priority: Major > > HDFS-13616 added a new API for batch listing of multiple directories, but > it isn't yet ready for tagging as stable & doesn't suit object stores. > * the new API is pulled into a new interface marked unstable; > * new classes (PartialListing) also tagged unstable. > * Define a new path capability. > HDFS will implement, but not filter/HarFS; it is an HDFS exclusive > implementation for now.
[GitHub] [hadoop] jojochuang commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
jojochuang commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#issuecomment-593719088 I've gone through all the usages of the snapshot id. The only concern I had was bitwise operations on the snapshot id, but I didn't find any. Widening the allowed range shouldn't be a problem.
[GitHub] [hadoop] jojochuang commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
jojochuang commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#discussion_r386750242

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java

## @@ -92,4 +92,15 @@ public void testSnapshotLimits() throws Exception {
     StringUtils.toLowerCase(se.getMessage()).contains("rollover"));
   }
 }
+
+
+  @Test
+  public void testValidateSnapshotIDWidth() {
+    FSDirectory fsdir = mock(FSDirectory.class);
+    SnapshotManager snapshotManager = new SnapshotManager(new Configuration(),
+        fsdir);
+    Assert.assertTrue(snapshotManager.

Review comment: can you add a description/comment to explain this assertion.
[GitHub] [hadoop] jojochuang commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
jojochuang commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#issuecomment-593718653 @karthikhw can you double check the failed tests? especially the snapshot tests.
[GitHub] [hadoop] hadoop-yetus commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
hadoop-yetus commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#issuecomment-593714119 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 18m 53s | trunk passed |
| +1 :green_heart: | compile | 1m 9s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 44s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 18s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 3s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 46s | trunk passed |
| +0 :ok: | spotbugs | 2m 49s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 48s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 7s | the patch passed |
| +1 :green_heart: | compile | 1m 0s | the patch passed |
| +1 :green_heart: | javac | 1m 0s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 38s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) |
| +1 :green_heart: | mvnsite | 1m 7s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 39s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 44s | the patch passed |
| +1 :green_heart: | findbugs | 2m 54s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 92m 36s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 159m 57s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestEncryptionZonesWithKMS |
| | hadoop.hdfs.TestEncryptionZones |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1870 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 828dfedc29e0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / edc2e9d |
| Default Java | 1.8.0_242 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/2/testReport/ |
| Max. process+thread count | 4480 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus commented on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-593699941 :confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 36s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 11s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 18m 46s | trunk passed |
| +1 :green_heart: | compile | 17m 0s | trunk passed |
| +1 :green_heart: | checkstyle | 2m 37s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 21s | trunk passed |
| +1 :green_heart: | shadedclient | 19m 40s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 13s | trunk passed |
| +0 :ok: | spotbugs | 1m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 33s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 41s | the patch passed |
| +1 :green_heart: | compile | 16m 14s | the patch passed |
| +1 :green_heart: | javac | 16m 14s | the patch passed |
| -0 :warning: | checkstyle | 2m 36s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | mvnsite | 1m 19s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 14m 15s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 16s | the patch passed |
| +0 :ok: | findbugs | 0m 30s | hadoop-project has no data from findbugs |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 25s | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 1m 33s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | 106m 19s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1872 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 9795022f8637 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / edc2e9d |
| Default Java | 1.8.0_242 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/artifact/out/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/testReport/ |
| Max. process+thread count | 452 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-tools/hadoop-azure U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus commented on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-593691330 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 12s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 18s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 15m 57s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 21s | trunk passed | | +0 :ok: | spotbugs | 0m 49s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 48s | trunk passed | | -0 :warning: | patch | 1m 5s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | the patch passed | | +1 :green_heart: | compile | 0m 21s | the patch passed | | +1 :green_heart: | javac | 0m 21s | the patch passed | | -0 :warning: | checkstyle | 0m 14s | hadoop-tools/hadoop-azure: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 24s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 15m 17s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 19s | the patch passed | | +1 :green_heart: | findbugs | 0m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. | | | | 61m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 6e61497f53c4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] jojochuang opened a new pull request #1873: YARN-7307. Allow client/AM update supported resource types via YARN APIs. (Sunil G via wangda)
jojochuang opened a new pull request #1873: YARN-7307. Allow client/AM update supported resource types via YARN APIs. (Sunil G via wangda) URL: https://github.com/apache/hadoop/pull/1873 (cherry picked from commit 170b6a48c4221d85faa0a53c4d632c5a04a2569c) Change-Id: I0e3190a4a674ceb03be76b48e9b5c9e692069eed Conflicts: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
[GitHub] [hadoop] bilaharith opened a new pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
bilaharith opened a new pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] karthikhw commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
karthikhw commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#issuecomment-593652779 Submitted new PR with the changes requested.
[jira] [Updated] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-16900: -- Description: If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specified by the S3 API, and there is an apparent bug where this failure is not fatal and the multipart upload is allowed to be marked as completed. (was: If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specified by the S3 API, and there is an apparent bug where this failure is not fatal.) > Very large files can be truncated when written through S3AFileSystem > > > Key: HADOOP-16900 > URL: https://issues.apache.org/jira/browse/HADOOP-16900 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Reporter: Andrew Olson >Priority: Major > Labels: s3 > > If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt > truncation of the S3 object will occur as the maximum number of parts in a > multipart upload is 10,000 as specified by the S3 API, and there is an apparent > bug where this failure is not fatal and the multipart upload is allowed to > be marked as completed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HADOOP-16900) Very large files can be truncated when written through S3AFileSystem
Andrew Olson created HADOOP-16900: - Summary: Very large files can be truncated when written through S3AFileSystem Key: HADOOP-16900 URL: https://issues.apache.org/jira/browse/HADOOP-16900 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Reporter: Andrew Olson If a written file size exceeds 10,000 * {{fs.s3a.multipart.size}}, a corrupt truncation of the S3 object will occur as the maximum number of parts in a multipart upload is 10,000 as specified by the S3 API, and there is an apparent bug where this failure is not fatal.
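The arithmetic behind this truncation limit is straightforward. The following is an illustrative sketch only (the class and method names are hypothetical, not from S3AFileSystem): since S3 caps a multipart upload at 10,000 parts, the largest object that can be uploaded intact is 10,000 times the configured part size.

```java
// Illustrative sketch; names are hypothetical, not Hadoop's actual code.
// S3 caps a multipart upload at 10,000 parts, so the largest object that
// can be written without truncation is 10,000 * fs.s3a.multipart.size.
public class S3aMultipartLimit {
    static final long MAX_PARTS = 10_000;

    /** Largest object size, in bytes, writable without exceeding the part cap. */
    static long maxSafeFileSize(long multipartSizeBytes) {
        return MAX_PARTS * multipartSizeBytes;
    }

    public static void main(String[] args) {
        long partSize = 64L * 1024 * 1024; // e.g. fs.s3a.multipart.size = 64M
        // 10,000 parts * 64 MB = 671,088,640,000 bytes (625 GB)
        System.out.println(maxSafeFileSize(partSize));
    }
}
```

Any write beyond that product is where the report says the upload should fail fatally rather than complete with a truncated object.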
[GitHub] [hadoop] karthikhw commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
karthikhw commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#discussion_r386677308 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java ## @@ -96,7 +96,7 @@ private final boolean snapshotDiffAllowSnapRootDescendant; private final AtomicInteger numSnapshots = new AtomicInteger(); - private static final int SNAPSHOT_ID_BIT_WIDTH = 24; + private static final int SNAPSHOT_ID_BIT_WIDTH = 31; Review comment: Sure @arp7 I will set to 28 then.
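The effect of widening SNAPSHOT_ID_BIT_WIDTH can be checked with a quick calculation. This is a hedged sketch (the helper class below is illustrative, mirroring the 2^width − 1 bound rather than quoting SnapshotManager itself):

```java
// Illustrative sketch: the maximum snapshot ID an n-bit counter can hold
// is 2^n - 1, so widening the field raises the ceiling that HDFS-15201 hit.
public class SnapshotIdWidth {
    static long maxSnapshotId(int bitWidth) {
        return (1L << bitWidth) - 1;
    }

    public static void main(String[] args) {
        System.out.println(maxSnapshotId(24)); // 16777215, the original ceiling
        System.out.println(maxSnapshotId(28)); // 268435455, the proposed widening
    }
}
```

Moving from 24 to 28 bits multiplies the ceiling by 16 while still leaving headroom below the 31-bit value first proposed in the diff.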
[GitHub] [hadoop] hadoop-yetus commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity.
hadoop-yetus commented on issue #1871: Hadoop-16899. Update HdfsDesign.md to reduce ambiguity. URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593644340 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 20m 57s | trunk passed | | +1 :green_heart: | mvnsite | 1m 13s | trunk passed | | +1 :green_heart: | shadedclient | 37m 36s | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 8s | the patch passed | | +1 :green_heart: | mvnsite | 1m 8s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 3s | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 57m 28s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1871/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1871 | | Optional Tests | dupname asflicense mvnsite markdownlint | | uname | Linux 01eccb3eecf9 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Max. process+thread count | 344 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1871/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
hadoop-yetus commented on issue #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#issuecomment-593644106 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 26s | trunk passed | | +1 :green_heart: | compile | 1m 8s | trunk passed | | +1 :green_heart: | checkstyle | 0m 45s | trunk passed | | +1 :green_heart: | mvnsite | 1m 15s | trunk passed | | +1 :green_heart: | shadedclient | 16m 3s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 47s | trunk passed | | +0 :ok: | spotbugs | 2m 50s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 48s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 7s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) | | +1 :green_heart: | mvnsite | 1m 8s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 58s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | | +1 :green_heart: | findbugs | 2m 58s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 80m 30s | hadoop-hdfs in the patch passed. 
| | +0 :ok: | asflicense | 0m 40s | ASF License check generated no output? | | | | 146m 50s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots | | | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | | | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens | | | hadoop.hdfs.server.namenode.TestCacheDirectives | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshot | | | hadoop.hdfs.server.namenode.TestFsck | | | hadoop.hdfs.server.namenode.TestLargeDirectoryDelete | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.TestNestedEncryptionZones | | | hadoop.hdfs.server.namenode.TestFSImage | | | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile | | | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1870 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 45d4fd00ddea 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/1/testReport/ | | Max. process+thread count | 4152 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1870/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1839: HADOOP-16848. Refactoring: initial layering
hadoop-yetus commented on issue #1839: HADOOP-16848. Refactoring: initial layering URL: https://github.com/apache/hadoop/pull/1839#issuecomment-593629160 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 25m 39s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 10s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 22s | trunk passed | | +1 :green_heart: | compile | 16m 49s | trunk passed | | +1 :green_heart: | checkstyle | 2m 37s | trunk passed | | +1 :green_heart: | mvnsite | 2m 18s | trunk passed | | +1 :green_heart: | shadedclient | 20m 47s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 45s | trunk passed | | +0 :ok: | spotbugs | 1m 10s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 12s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 19s | hadoop-aws in the patch failed. | | -1 :x: | compile | 15m 25s | root in the patch failed. | | -1 :x: | javac | 15m 25s | root in the patch failed. | | -0 :warning: | checkstyle | 2m 40s | root: The patch generated 42 new + 31 unchanged - 0 fixed = 73 total (was 31) | | -1 :x: | mvnsite | 0m 35s | hadoop-aws in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 47s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 34s | the patch passed | | -1 :x: | findbugs | 0m 35s | hadoop-aws in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 9m 36s | hadoop-common in the patch passed. | | -1 :x: | unit | 0m 33s | hadoop-aws in the patch failed. | | +1 :green_heart: | asflicense | 0m 50s | The patch does not generate ASF License warnings. | | | | 145m 32s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.shell.TestCopy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1839 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 395a70bf17f6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/diff-checkstyle-root.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/testReport/ | | Max. process+thread count | 1535 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1839/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation
hadoop-yetus commented on issue #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation URL: https://github.com/apache/hadoop/pull/1820#issuecomment-593628155 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 0s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 14s | trunk passed | | +1 :green_heart: | compile | 24m 31s | trunk passed | | +1 :green_heart: | checkstyle | 3m 32s | trunk passed | | +1 :green_heart: | mvnsite | 2m 45s | trunk passed | | +1 :green_heart: | shadedclient | 26m 15s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 53s | trunk passed | | +0 :ok: | spotbugs | 1m 21s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 2s | trunk passed | | -0 :warning: | patch | 1m 50s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 49s | the patch passed | | +1 :green_heart: | compile | 19m 43s | the patch passed | | +1 :green_heart: | javac | 19m 43s | the patch passed | | -0 :warning: | checkstyle | 2m 56s | root: The patch generated 52 new + 98 unchanged - 19 fixed = 150 total (was 117) | | +1 :green_heart: | mvnsite | 2m 33s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedclient | 16m 26s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 33s | the patch passed | | -1 :x: | findbugs | 1m 18s | hadoop-tools/hadoop-aws generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) | ||| _ Other Tests _ | | -1 :x: | unit | 9m 32s | hadoop-common in the patch passed. | | -1 :x: | unit | 1m 31s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. | | | | 151m 47s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.policySetCount in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.inputPolicySet(int) At S3AInstrumentation.java:[line 818] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readExceptions in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readException() At S3AInstrumentation.java:[line 755] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readFullyOperationStarted(long, long) At S3AInstrumentation.java:[line 788] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readsIncomplete in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) 
At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationCompleted(int, int) At S3AInstrumentation.java:[line 799] | | | Increment of volatile field org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperations in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:in org.apache.hadoop.fs.s3a.S3AInstrumentation$InputStreamStatisticsImpl.readOperationStarted(long, long) At S3AInstrumentation.java:[line 777] | | | Increment of volatile field
[GitHub] [hadoop] invincible-akshay commented on issue #1871: Update HdfsDesign.md
invincible-akshay commented on issue #1871: Update HdfsDesign.md URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593617851 Re-opened on creating JIRA.
[GitHub] [hadoop] invincible-akshay opened a new pull request #1871: Update HdfsDesign.md
invincible-akshay opened a new pull request #1871: Update HdfsDesign.md URL: https://github.com/apache/hadoop/pull/1871 Proposed change is in 2nd last sentence of the affected paragraph. Considering the statement segmented in 3 parts by the commas: 1. the first part talks about "one thirds of replicas"; 2. the second part talks about "two thirds of replicas" 3. the third part talking about "the other third" is leading to ambiguity when one thirds and two thirds have already accounted for the whole. Possible solution is to either get rid of the third part or rephrase entire sentence to capture the overall essence of the sentence. Please suggest. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Updated] (HADOOP-16899) Update HdfsDesign.md to reduce ambiguity
[ https://issues.apache.org/jira/browse/HADOOP-16899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akshay Nehe updated HADOOP-16899: - Description: A proposed update to [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md|http://example.com/] in the section "Replica Placement: The First Baby Steps" 4th paragraph, 2nd last line. The sentence is leading to ambiguity of reader. Considering the statement segmented in 3 parts by the commas: # the first part talks about "one thirds of replicas"; # the second part talks about "two thirds of replicas" # the third part talking about "the other third" is leading to ambiguity when one thirds and two thirds have already accounted for the whole. Proposed solution: Getting rid of the third part or rephrasing entire sentence to capture the overall essence of the sentence. In other words, replacing _One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks._ with _One third of replicas are on one node, two thirds of replicas are on one rack._ Please suggest if any additional meaning is getting lost with this replacement. was: A proposed update to [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md|http://example.com] in the section "Replica Placement: The First Baby Steps" 4th paragraph, 2nd last line. The sentence is leading to ambiguity of reader. Proposed solution: Getting rid of the third part of sentence (after 2nd comma) or rephrase entire sentence to capture the overall essence of the sentence. 
In other words, replacing _One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks._ with _One third of replicas are on one node, two thirds of replicas are on one rack._ Please suggest if any additional meaning is getting lost with this replacement. > Update HdfsDesign.md to reduce ambiguity > > > Key: HADOOP-16899 > URL: https://issues.apache.org/jira/browse/HADOOP-16899 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Reporter: Akshay Nehe >Priority: Minor > > A proposed update to > [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md|http://example.com/] > in the section "Replica Placement: The First Baby Steps" 4th paragraph, 2nd > last line. > The sentence is leading to ambiguity of reader. > Considering the statement segmented in 3 parts by the commas: > # the first part talks about "one thirds of replicas"; > # the second part talks about "two thirds of replicas" > # the third part talking about "the other third" is leading to ambiguity > when one thirds and two thirds have already accounted for the whole. > Proposed solution: > Getting rid of the third part or rephrasing entire sentence to capture the > overall essence of the sentence. > In other words, replacing > _One third of replicas are on one node, two thirds of replicas are on one > rack, and the other third are evenly distributed across the remaining racks._ > with > _One third of replicas are on one node, two thirds of replicas are on one > rack._ > Please suggest if any additional meaning is getting lost with this > replacement. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16899) Update HdfsDesign.md to reduce ambiguity
Akshay Nehe created HADOOP-16899: Summary: Update HdfsDesign.md to reduce ambiguity Key: HADOOP-16899 URL: https://issues.apache.org/jira/browse/HADOOP-16899 Project: Hadoop Common Issue Type: Improvement Components: documentation Reporter: Akshay Nehe A proposed update to [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md|http://example.com] in the section "Replica Placement: The First Baby Steps" 4th paragraph, 2nd last line. The sentence is leading to ambiguity of reader. Proposed solution: Getting rid of the third part of sentence (after 2nd comma) or rephrase entire sentence to capture the overall essence of the sentence. In other words, replacing _One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks._ with _One third of replicas are on one node, two thirds of replicas are on one rack._ Please suggest if any additional meaning is getting lost with this replacement.
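For context, the "thirds" wording under discussion describes HDFS's default placement for a replication factor of 3, where the fractions fall out of the policy rather than being targets. A minimal sketch of that policy as HdfsDesign.md states it (an illustration only; `placeReplicas` is a hypothetical helper, not HDFS's actual `BlockPlacementPolicyDefault` code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the default placement described in HdfsDesign.md:
// first replica on the writer's node, second on a node in a different rack,
// third on a different node in that same remote rack. For replication
// factor 3 this yields "one third on one node" and "two thirds on one
// rack", which is why "the other third" reads ambiguously.
public class ReplicaPlacementSketch {
    static List<String> placeReplicas(String localRack, String remoteRack) {
        List<String> racks = new ArrayList<>();
        racks.add(localRack);   // replica 1: writer's own node
        racks.add(remoteRack);  // replica 2: a node on a remote rack
        racks.add(remoteRack);  // replica 3: another node on the same remote rack
        return racks;
    }

    public static void main(String[] args) {
        System.out.println(placeReplicas("rackA", "rackB")); // [rackA, rackB, rackB]
    }
}
```

With only three replicas, the "other third evenly distributed across the remaining racks" clause applies only when the replication factor exceeds 3, which supports the proposal to drop or rephrase it.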
[GitHub] [hadoop] invincible-akshay commented on issue #1871: Update HdfsDesign.md
invincible-akshay commented on issue #1871: Update HdfsDesign.md URL: https://github.com/apache/hadoop/pull/1871#issuecomment-593597372 Closing because the procedure to raise a JIRA before pull request wasn't followed, will raise again with right steps.
[GitHub] [hadoop] invincible-akshay closed pull request #1871: Update HdfsDesign.md
invincible-akshay closed pull request #1871: Update HdfsDesign.md URL: https://github.com/apache/hadoop/pull/1871
[GitHub] [hadoop] arp7 commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
arp7 commented on a change in pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870#discussion_r386626461 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java ## @@ -96,7 +96,7 @@ private final boolean snapshotDiffAllowSnapRootDescendant; private final AtomicInteger numSnapshots = new AtomicInteger(); - private static final int SNAPSHOT_ID_BIT_WIDTH = 24; + private static final int SNAPSHOT_ID_BIT_WIDTH = 31; Review comment: I believe @szetszwo recommended setting this to 28 bits for now (something less than 31).
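To make the numbers under review concrete: the bit width caps the largest snapshot id the counter can issue, so widening it from 24 raises that ceiling. A minimal sketch of the arithmetic (illustrative; `maxSnapshotId` is a hypothetical helper, not SnapshotManager's API):

```java
// Sketch of what SNAPSHOT_ID_BIT_WIDTH buys: the ceiling on snapshot ids.
// Hypothetical helper, not Hadoop code; the real constant lives in
// org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.
public class SnapshotIdLimit {
    static long maxSnapshotId(int bitWidth) {
        return (1L << bitWidth) - 1; // highest id representable in bitWidth bits
    }

    public static void main(String[] args) {
        System.out.println(maxSnapshotId(24)); // original width: 16777215 ids
        System.out.println(maxSnapshotId(28)); // suggested width: 268435455 ids
        System.out.println(maxSnapshotId(31)); // patch as posted: 2147483647 ids
    }
}
```

Keeping the width below 31 bits, as suggested in the review, leaves headroom under `Integer.MAX_VALUE` rather than using every bit of a signed 32-bit int for the id.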
[jira] [Commented] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049610#comment-17049610 ] Hadoop QA commented on HADOOP-16890: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 56s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | | {color:orange}-0{color} | {color:orange} patch {color} | {color:orange} 1m 18s{color} | {color:orange} Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 0s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 0s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 33s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 16s{color} | {color:blue} hadoop-project has no data from findbugs {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 10s{color} | {color:black} {color} | \\ \\ || Subsystem ||
[GitHub] [hadoop] hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider
hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1866#issuecomment-593593848 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 3s | trunk passed | | +1 :green_heart: | compile | 17m 29s | trunk passed | | +1 :green_heart: | checkstyle | 2m 44s | trunk passed | | +1 :green_heart: | mvnsite | 1m 3s | trunk passed | | +1 :green_heart: | shadedclient | 20m 37s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | trunk passed | | +0 :ok: | spotbugs | 0m 56s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 24s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | -0 :warning: | patch | 1m 18s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 37s | the patch passed | | -1 :x: | compile | 10m 0s | root in the patch failed. | | -1 :x: | javac | 10m 0s | root in the patch failed. | | -0 :warning: | checkstyle | 2m 33s | root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 46s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 4s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 36s | the patch passed | | +0 :ok: | findbugs | 0m 16s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 14s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 10s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 99m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 316aeeb07aab 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/testReport/ | | Max. process+thread count | 357 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . 
| | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/8/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface
hadoop-yetus commented on issue #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface URL: https://github.com/apache/hadoop/pull/1869#issuecomment-593584429 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 24m 59s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 10s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 8s | trunk passed | | +1 :green_heart: | compile | 16m 59s | trunk passed | | +1 :green_heart: | checkstyle | 2m 46s | trunk passed | | +1 :green_heart: | mvnsite | 4m 1s | trunk passed | | +1 :green_heart: | shadedclient | 22m 12s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 46s | trunk passed | | +0 :ok: | spotbugs | 3m 4s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 7m 35s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 45s | the patch passed | | +1 :green_heart: | compile | 16m 10s | the patch passed | | +1 :green_heart: | javac | 16m 10s | the patch passed | | -0 :warning: | checkstyle | 2m 47s | root: The patch generated 7 new + 282 unchanged - 11 fixed = 289 total (was 293) | | +1 :green_heart: | mvnsite | 3m 58s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 26s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 2m 47s | the patch passed | | +1 :green_heart: | findbugs | 8m 4s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 9m 13s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 19s | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 94m 48s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 4s | The patch does not generate ASF License warnings. | | | | 258m 51s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.shell.TestCopy | | | hadoop.hdfs.TestEncryptionZonesWithKMS | | | hadoop.hdfs.TestEncryptionZones | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1869 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 90ca5bfe1b4d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / e9eeced | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/testReport/ | | Max. process+thread count | 4405 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . 
| | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1869/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16828) Zookeeper Delegation Token Manager fetch sequence number by batch
[ https://issues.apache.org/jira/browse/HADOOP-16828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049589#comment-17049589 ] Xiaoyu Yao commented on HADOOP-16828: - Thanks [~fengnanli] for reporting the issue and provide the patch. The patch LGTM overall. The performance improvement is impressive. Here are a few minor comments. ZKDelegationTokenSecretManager.java Line:100 NIT: can we add a token as part of the prefix for the new key? i.e. "token.seqnum.batch.size" Line 559: getDelegationTokenSeqNum() this function needs to be changed as the delTokenSeqCounter.getCount() will be updated in batch. We should return currentSeqNum here instead. TestZKDelegationTokenSecretManager.java As shown in the test, if the batch size is large, say 1000, this might leave holes in the sequence number when KMS failover. It might be an acceptable tradeoff. Please ensure the DTSM instances (tm1, tm2) are properly destroyed after the test by calling verifyDestroy(). > Zookeeper Delegation Token Manager fetch sequence number by batch > - > > Key: HADOOP-16828 > URL: https://issues.apache.org/jira/browse/HADOOP-16828 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-16828.001.patch, Screen Shot 2020-01-25 at > 2.25.06 PM.png, Screen Shot 2020-01-25 at 2.25.16 PM.png, Screen Shot > 2020-01-25 at 2.25.24 PM.png > > > Currently in ZKDelegationTokenSecretManager.java the seq number is > incremented by 1 each time there is a request for creating new token. This > will need to send traffic to Zookeeper server. With multiple managers > running, there is data contention going on. Also, since the current logic of > incrementing is using tryAndSet which is optimistic concurrency control > without locking. This data contention is having performance degradation when > the secret manager are under volume of traffic. 
> The change here is to fetch this seq number by batch instead of 1, which > will reduce the traffic sent to ZK and keep many operations inside the ZK secret > manager's memory. > After putting this into production we saw a huge improvement to the RPC > processing latency of get-delegation-token calls. Also, since ZK takes less > traffic this way, other write calls, like renew and cancel delegation > tokens, are benefiting from this change.
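The strategy discussed above — compete once for the start of a range, then count up locally — can be sketched with an in-process stand-in for the ZooKeeper counter. Names such as `BatchSequenceAllocator` and `nextSequenceNumber` are illustrative, not the ZKDelegationTokenSecretManager API; the real patch updates a Curator SharedCount stored in ZooKeeper rather than an `AtomicLong`:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of range-based sequence allocation. The AtomicLong
// stands in for the ZooKeeper-backed shared counter; each getAndAdd models
// one round trip to ZK claiming a whole batch of sequence numbers.
public class BatchSequenceAllocator {
    private final AtomicLong shared;  // stand-in for the ZK shared counter
    private final int batchSize;      // e.g. a "seqnum batch size" config knob
    private long currentSeqNum = -1;  // next id to hand out locally
    private long rangeEnd = -1;       // exclusive end of the claimed range

    BatchSequenceAllocator(AtomicLong shared, int batchSize) {
        this.shared = shared;
        this.batchSize = batchSize;
    }

    // Claims a whole range with one shared-counter update, then serves
    // batchSize ids from local memory before touching the counter again.
    synchronized long nextSequenceNumber() {
        if (currentSeqNum >= rangeEnd) {
            long rangeStart = shared.getAndAdd(batchSize); // one "ZK" round trip
            currentSeqNum = rangeStart;
            rangeEnd = rangeStart + batchSize;
        }
        return currentSeqNum++;
    }

    public static void main(String[] args) {
        AtomicLong zkCounter = new AtomicLong(0);
        BatchSequenceAllocator m1 = new BatchSequenceAllocator(zkCounter, 1000);
        BatchSequenceAllocator m2 = new BatchSequenceAllocator(zkCounter, 1000);
        System.out.println(m1.nextSequenceNumber()); // 0    (m1 claims [0, 1000))
        System.out.println(m2.nextSequenceNumber()); // 1000 (m2 claims [1000, 2000))
        System.out.println(m1.nextSequenceNumber()); // 1    (local, no "ZK" call)
    }
}
```

The "holes" mentioned in the review fall out naturally from this design: if a manager fails over having used only part of its claimed range, the remaining ids in that range are simply never issued.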
[GitHub] [hadoop] karthikhw opened a new pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit
karthikhw opened a new pull request #1870: HDFS-15201 SnapshotCounter hits MaxSnapshotID limit URL: https://github.com/apache/hadoop/pull/1870
[GitHub] [hadoop] bilaharith closed pull request #1866: HADOOP-16890: Change in expiry calculation for MSI token provider
bilaharith closed pull request #1866: HADOOP-16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1866
[jira] [Commented] (HADOOP-14936) S3Guard: remove "experimental" from documentation
[ https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049541#comment-17049541 ] Hudson commented on HADOOP-14936: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18021 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18021/]) HADOOP-14936. S3Guard: remove experimental from documentation. (github: rev edc2e9d2f138c9a87ce2f0e46169d53e0af02de7) * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md > S3Guard: remove "experimental" from documentation > - > > Key: HADOOP-14936 > URL: https://issues.apache.org/jira/browse/HADOOP-14936 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Aaron Fabbri >Assignee: Gabor Bota >Priority: Major > > I think it is time to remove the "experimental feature" designation in the > site docs for S3Guard. Discuss.
[GitHub] [hadoop] steveloughran merged pull request #1863: HADOOP-14936. S3Guard: remove experimental from documentation
steveloughran merged pull request #1863: HADOOP-14936. S3Guard: remove experimental from documentation URL: https://github.com/apache/hadoop/pull/1863
[jira] [Commented] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049518#comment-17049518 ] Mingliang Liu commented on HADOOP-16794: I see major backport conflicts from {{trunk}} to branch-3.2 and older branches. [~mukund-thakur] Could you provide a patch for other branches? I can help review and backport. Thanks, > S3A reverts KMS encryption to the bucket's default KMS key in rename/copy > - > > Key: HADOOP-16794 > URL: https://issues.apache.org/jira/browse/HADOOP-16794 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Fix For: 3.3.0 > > > When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all > files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the > wrong encryption key, always falling back to the region-specific AWS-managed > KMS key for S3, instead of retaining the custom CMK.
[jira] [Commented] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049508#comment-17049508 ] Hudson commented on HADOOP-16897: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18020 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18020/]) HADOOP-16897. Sort fields in ReflectionUtils.java. (github: rev 5678b19b016934fecfb16177c849668d642c9c7a) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java > Sort fields in ReflectionUtils.java > --- > > Key: HADOOP-16897 > URL: https://issues.apache.org/jira/browse/HADOOP-16897 > Project: Hadoop Common > Issue Type: Bug > Components: test, util >Affects Versions: 3.3.0 >Reporter: cpugputpu >Assignee: cpugputpu >Priority: Minor > Fix For: 3.3.0 > > > The tests in > _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl#testInitFirstVerifyCallBacks_ > and > _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately_ > can fail. 
> java.lang.AssertionError: > Element 0 for metrics expected: {name=C1, description=C1 desc} > , value=1}> > but was: {name=G1, description=G1 desc} > , value=2}> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.apache.hadoop.test.MoreAsserts.assertEquals(MoreAsserts.java:60) > at > org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.checkMetricsRecords(TestMetricsSystemImpl.java:439) > at > org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyCallBacks(TestMetricsSystemImpl.java:178) > > The root cause of this failure can be analyzed in the following stack trace: > _java.lang.Class.*getDeclaredFields*(Class.java:1916)_ > > _org.apache.hadoop.util.ReflectionUtils.getDeclaredFieldsIncludingInherited(ReflectionUtils.java:353)_ > > _org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.(MetricsSourceBuilder.java:68)_ > > _org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:43)_ > > _org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)_ > > _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyCallBacks(TestMetricsSystemImpl.java:156)_ > The specification about getDeclaredFields() says that "the elements in the > returned array are not sorted and are not in any particular order". The > documentation is here for your reference: > [https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#getDeclaredFields--] > And the behaviour might be different for different JVM versions or vendors > > The fix is to sort the fields returned by getDeclaredFields() so that the > non-deterministic behaviour can be eliminated completely. In this way, the > test becomes more stable and it will not suffer from the failure above any > more. 
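The fix described above — sorting what `getDeclaredFields()` returns so iteration order is deterministic across JVMs — can be sketched as follows (a simplified illustration of the idea, not the exact ReflectionUtils change; the nested `Example` class is hypothetical):

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

// Class.getDeclaredFields() makes no ordering guarantee, so code that
// iterates the result (as MetricsSourceBuilder does via ReflectionUtils)
// can behave differently across JVM versions or vendors. Sorting the
// array by field name removes the non-determinism.
public class SortedFields {
    static Field[] declaredFieldsSorted(Class<?> clazz) {
        Field[] fields = clazz.getDeclaredFields();
        Arrays.sort(fields, Comparator.comparing(Field::getName));
        return fields;
    }

    // Hypothetical class whose declaration order (c1, g1, a0) differs
    // from name order.
    static class Example { int c1; int g1; int a0; }

    public static void main(String[] args) {
        for (Field f : declaredFieldsSorted(Example.class)) {
            System.out.println(f.getName()); // a0, c1, g1 on any JVM
        }
    }
}
```

Sorting costs one `Arrays.sort` per reflective scan, which is negligible next to the reflection itself, and makes tests such as `testInitFirstVerifyCallBacks` stable without changing what metrics are registered.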
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1866#issuecomment-593279025 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 23m 42s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 21s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 20s | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | trunk passed | | +1 :green_heart: | shadedclient | 15m 56s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 50s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | the patch passed | | +1 :green_heart: | compile | 0m 21s | the patch passed | | +1 :green_heart: | javac | 0m 21s | the patch passed | | -0 :warning: | checkstyle | 0m 14s | hadoop-tools/hadoop-azure: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 26s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 19s | the patch passed | | +1 :green_heart: | findbugs | 0m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 8s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 84m 38s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 32ca33893564 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1a636da | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/3/testReport/ | | Max. process+thread count | 345 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16890: Comment: was deleted (was: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 50s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 65dedd02252c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a43510e | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results
[jira] [Resolved] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16897.
-------------------------------------
    Resolution: Fixed

fixed in trunk - thanks!

> Sort fields in ReflectionUtils.java
> -----------------------------------
>
>                 Key: HADOOP-16897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16897
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: test, util
>    Affects Versions: 3.3.0
>            Reporter: cpugputpu
>            Assignee: cpugputpu
>            Priority: Minor
>             Fix For: 3.3.0
>
> The tests in _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl#testInitFirstVerifyCallBacks_ and _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately_ can fail:
> java.lang.AssertionError:
> Element 0 for metrics expected: {name=C1, description=C1 desc}, value=1}>
> but was: {name=G1, description=G1 desc}, value=2}>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.apache.hadoop.test.MoreAsserts.assertEquals(MoreAsserts.java:60)
> at org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.checkMetricsRecords(TestMetricsSystemImpl.java:439)
> at org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyCallBacks(TestMetricsSystemImpl.java:178)
>
> The root cause can be seen in the following stack trace:
> _java.lang.Class.*getDeclaredFields*(Class.java:1916)_
> _org.apache.hadoop.util.ReflectionUtils.getDeclaredFieldsIncludingInherited(ReflectionUtils.java:353)_
> _org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.<init>(MetricsSourceBuilder.java:68)_
> _org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:43)_
> _org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)_
> _org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyCallBacks(TestMetricsSystemImpl.java:156)_
>
> The specification of getDeclaredFields() says that "the elements in the returned array are not sorted and are not in any particular order"; see [https://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#getDeclaredFields--]. The behaviour may therefore differ across JVM versions and vendors.
>
> The fix is to sort the fields returned by getDeclaredFields() so that the non-deterministic behaviour is eliminated. This makes the tests stable: they no longer suffer from the failure above.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
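The fix described above, imposing a deterministic order on the reflective field listing, can be sketched in isolation. This is a minimal illustration of the technique, not the actual ReflectionUtils patch; the `Sample` class and the method name are invented for the demo:

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

public class SortedFieldsDemo {

    // A class whose declaration order differs from alphabetical order.
    static class Sample {
        int zeta;
        int alpha;
        int mid;
    }

    // getDeclaredFields() guarantees no particular order, so sort by name
    // to give every caller a deterministic sequence.
    static String[] sortedDeclaredFieldNames(Class<?> clazz) {
        Field[] fields = clazz.getDeclaredFields();
        Arrays.sort(fields, Comparator.comparing(Field::getName));
        return Arrays.stream(fields).map(Field::getName).toArray(String[]::new);
    }

    public static void main(String[] args) {
        // Deterministic regardless of JVM vendor or version.
        System.out.println(String.join(",", sortedDeclaredFieldNames(Sample.class)));
    }
}
```

Sorting by name is the simplest total order available via reflection; any stable, JVM-independent comparator would equally remove the flakiness.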
[jira] [Updated] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16897:
------------------------------------
    Fix Version/s: 3.3.0
[jira] [Assigned] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-16897:
---------------------------------------
    Assignee: cpugputpu
[jira] [Updated] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16897:
------------------------------------
    Component/s: test
[jira] [Updated] (HADOOP-16897) Sort fields in ReflectionUtils.java
[ https://issues.apache.org/jira/browse/HADOOP-16897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-16897:
------------------------------------
    Affects Version/s: 3.3.0
[GitHub] [hadoop] steveloughran merged pull request #1868: HADOOP-16897. Sort fields in ReflectionUtils.java
steveloughran merged pull request #1868: HADOOP-16897. Sort fields in ReflectionUtils.java
URL: https://github.com/apache/hadoop/pull/1868
[jira] [Commented] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049495#comment-17049495 ]

Hudson commented on HADOOP-16794:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18019 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18019/])
HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS key (github: rev f864ef742960b805b430841c3a1ccb9e11bcc77c)
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractTestS3AEncryption.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/test/ExtraAssertions.java
* (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
* (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/EncryptionTestUtils.java
* (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesEncryption.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSDefaultKey.java
* (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionWithDefaultS3Settings.java

> S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-16794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16794
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Mukund Thakur
>            Assignee: Mukund Thakur
>            Priority: Major
>             Fix For: 3.3.0
>
> When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the wrong encryption key, always falling back to the region-specific AWS-managed KMS key for S3, instead of retaining the custom CMK.
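The failure mode in this issue, a copy request built without re-attaching the caller's SSE-KMS settings so the store reverts to the bucket's default key, can be modelled with a small self-contained sketch. The `CopyRequest` type and both builder methods below are hypothetical stand-ins for illustration, not the AWS SDK or S3AFileSystem API:

```java
public class CopyEncryptionDemo {

    static final class CopyRequest {
        final String sourceKey;
        final String destKey;
        final String kmsKeyId; // null means "fall back to the bucket default key"

        CopyRequest(String sourceKey, String destKey, String kmsKeyId) {
            this.sourceKey = sourceKey;
            this.destKey = destKey;
            this.kmsKeyId = kmsKeyId;
        }
    }

    // Buggy pattern: the copy request carries no encryption parameters,
    // so the store silently re-encrypts with the bucket default key.
    static CopyRequest copyRequestBuggy(String src, String dst) {
        return new CopyRequest(src, dst, null);
    }

    // Fixed pattern: propagate the filesystem's configured CMK onto
    // every copy request, matching what the original upload used.
    static CopyRequest copyRequestFixed(String src, String dst, String configuredKmsKeyId) {
        return new CopyRequest(src, dst, configuredKmsKeyId);
    }
}
```

The point of the sketch is only that rename/copy paths must rebuild the same encryption settings used for plain writes; which API call carries those settings is store-specific.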
[GitHub] [hadoop] steveloughran commented on issue #1868: HADOOP-16897. Sort fields in ReflectionUtils.java
steveloughran commented on issue #1868: HADOOP-16897. Sort fields in ReflectionUtils.java
URL: https://github.com/apache/hadoop/pull/1868#issuecomment-593529759

+1, committing. we all hate flaky tests. thanks for this
[GitHub] [hadoop] steveloughran commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
steveloughran commented on a change in pull request #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
URL: https://github.com/apache/hadoop/pull/1851#discussion_r386543844

File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardFsckViolationHandler.java

@@ -60,28 +66,55 @@ public void handle(S3GuardFsck.ComparePair comparePair) {
     sB.append(newLine)
         .append("On path: ").append(comparePair.getPath()).append(newLine);
-    handleComparePair(comparePair, sB);
+    handleComparePair(comparePair, sB, HandleMode.LOG);
     LOG.error(sB.toString());
   }

+  public void doFix(S3GuardFsck.ComparePair comparePair) throws IOException {
+    if (!comparePair.containsViolation()) {
+      LOG.debug("There is no violation in the compare pair: {}", comparePair);
+      return;
+    }
+
+    StringBuilder sB = new StringBuilder();
+    sB.append(newLine)
+        .append("On path: ").append(comparePair.getPath()).append(newLine);
+
+    handleComparePair(comparePair, sB, HandleMode.FIX);
+
+    LOG.info(sB.toString());
+  }
+
   /**
    * Create a new instance of the violation handler for all the violations
    * found in the compare pair and use it.
    *
    * @param comparePair the compare pair with violations
    * @param sB StringBuilder to append error strings from violations.
    */
-  protected static void handleComparePair(S3GuardFsck.ComparePair comparePair,
-      StringBuilder sB) {
+  protected void handleComparePair(S3GuardFsck.ComparePair comparePair,

Review comment: javadoc change?
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
hadoop-yetus removed a comment on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries URL: https://github.com/apache/hadoop/pull/1851#issuecomment-587107658 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 26m 30s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 18m 59s | trunk passed | | +1 :green_heart: | compile | 0m 35s | trunk passed | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 15m 12s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed | | +1 :green_heart: | javac | 0m 29s | the patch passed | | -0 :warning: | checkstyle | 0m 20s | hadoop-tools/hadoop-aws: The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 40s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 25s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 84m 22s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1851 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 590010f072c2 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 439d935 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/1/testReport/ | | Max. process+thread count | 447 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran closed pull request #1843: HADOOP-16794. encryption over rename/copy
steveloughran closed pull request #1843: HADOOP-16794. encryption over rename/copy
URL: https://github.com/apache/hadoop/pull/1843
[GitHub] [hadoop] steveloughran closed pull request #1854: Hadoop 16864: TEST - NOT FOR CHECKIN
steveloughran closed pull request #1854: Hadoop 16864: TEST - NOT FOR CHECKIN
URL: https://github.com/apache/hadoop/pull/1854
[jira] [Commented] (HADOOP-16767) S3AInputStream reopening does not handle non IO exceptions properly
[ https://issues.apache.org/jira/browse/HADOOP-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049480#comment-17049480 ] Hudson commented on HADOOP-16767: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18018 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18018/]) HADOOP-16767 Handle non-IO exceptions in reopen() (github: rev e553eda9cd492ddc2b3aebe913913005e7b387c9) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java > S3AInputStream reopening does not handle non IO exceptions properly > --- > > Key: HADOOP-16767 > URL: https://issues.apache.org/jira/browse/HADOOP-16767 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.1.1 >Reporter: Sergei Poganshev >Assignee: Sergei Poganshev >Priority: Major > Fix For: 3.3.0 > > > Since only {{IOException}} is getting caught in > [closeStream|https://github.com/apache/hadoop/blob/24080666e5e2214d4a362c889cd9aa617be5de81/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L579], > {{reopen()}} fails in case {{SdkClientException}} occurs on a drain attempt. > This leads to multiple failing retries of {{invoker.retry("read",...)}} and > then the whole {{read()}} operation failing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
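The bug pattern fixed by HADOOP-16767 can be sketched as follows. This is a hedged illustration, not the actual S3AInputStream code: the class and method names here are invented stand-ins (including `FakeSdkClientException`, which models the unchecked `SdkClientException`). The point is that a drain helper which catches only `IOException` lets an unchecked SDK exception escape and fail the whole retry loop; widening the catch lets the code fall back to aborting the connection instead.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseStreamSketch {

  /** Stand-in for the AWS SDK's unchecked SdkClientException. */
  static class FakeSdkClientException extends RuntimeException {
    FakeSdkClientException(String msg) { super(msg); }
  }

  /**
   * Drain and close the stream so the HTTP connection can be reused.
   * On ANY failure -- checked or unchecked -- fall back to "aborting"
   * rather than propagating the exception to the caller's retry loop.
   * @return true if drained and closed cleanly, false if aborted.
   */
  static boolean closeStream(InputStream in, long remaining) {
    try {
      if (remaining > 0) {
        while (in.read() >= 0) {
          // discard remaining bytes
        }
      }
      in.close();
      return true;
    } catch (IOException | RuntimeException e) {
      // The original bug: only IOException was caught here, so an
      // SdkClientException thrown mid-drain escaped and failed reopen().
      return false; // abort path: give up on connection reuse
    }
  }

  /** A stream that throws an unchecked SDK-style exception mid-drain. */
  static InputStream failingStream() {
    return new ByteArrayInputStream(new byte[8]) {
      @Override
      public synchronized int read() {
        throw new FakeSdkClientException("connection reset during drain");
      }
    };
  }

  public static void main(String[] args) {
    // A healthy stream drains cleanly; an SDK failure mid-drain is
    // swallowed and reported as an abort instead of escaping.
    System.out.println(closeStream(new ByteArrayInputStream(new byte[8]), 8)); // true
    System.out.println(closeStream(failingStream(), 8)); // false
  }
}
```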
[jira] [Commented] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049471#comment-17049471 ] Steve Loughran commented on HADOOP-16794: - OK, merged in to trunk...feel free to backport > S3A reverts KMS encryption to the bucket's default KMS key in rename/copy > - > > Key: HADOOP-16794 > URL: https://issues.apache.org/jira/browse/HADOOP-16794 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Fix For: 3.3.0 > > > When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all > files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the > wrong encryption key, always falling back to the region-specific AWS-managed > KMS key for S3, instead of retaining the custom CMK.
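The class of bug described in this issue can be illustrated with a self-contained sketch. This is not the S3AFileSystem patch: the method names are invented, and plain maps stand in for the AWS SDK request objects. The S3 header names are the real SSE headers, but everything else is an assumption. The underlying behaviour being modelled: an S3 CopyObject does not inherit the source object's SSE-KMS key, so unless the copy request re-states the algorithm and the customer-managed key (CMK), the destination silently falls back to the bucket's or region's default KMS key.

```java
import java.util.HashMap;
import java.util.Map;

public class CopyEncryptionSketch {

  // Real S3 server-side-encryption headers; the surrounding code is invented.
  static final String SSE_ALGORITHM = "x-amz-server-side-encryption";
  static final String SSE_KMS_KEY_ID =
      "x-amz-server-side-encryption-aws-kms-key-id";

  /** Buggy shape: the copy request carries no encryption headers, so S3
   *  re-encrypts the destination with the bucket's default KMS key. */
  static Map<String, String> buildCopyRequestBroken(
      Map<String, String> srcMetadata) {
    return new HashMap<>();
  }

  /** Fixed shape: re-assert the source object's SSE-KMS settings on the
   *  copy request so the custom CMK is retained across rename/copy. */
  static Map<String, String> buildCopyRequestFixed(
      Map<String, String> srcMetadata) {
    Map<String, String> copyHeaders = new HashMap<>();
    String algorithm = srcMetadata.get(SSE_ALGORITHM);
    if ("aws:kms".equals(algorithm)) {
      copyHeaders.put(SSE_ALGORITHM, algorithm);
      String keyId = srcMetadata.get(SSE_KMS_KEY_ID);
      if (keyId != null) {
        // Without this line the destination gets the default key.
        copyHeaders.put(SSE_KMS_KEY_ID, keyId);
      }
    }
    return copyHeaders;
  }

  public static void main(String[] args) {
    Map<String, String> src = new HashMap<>();
    src.put(SSE_ALGORITHM, "aws:kms");
    // Hypothetical example ARN, not a real key.
    src.put(SSE_KMS_KEY_ID,
        "arn:aws:kms:eu-west-1:111122223333:key/example-cmk");

    System.out.println(buildCopyRequestBroken(src)); // {} -- CMK lost
    System.out.println(buildCopyRequestFixed(src).get(SSE_KMS_KEY_ID));
  }
}
```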
[jira] [Updated] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16794: Fix Version/s: 3.3.0 > S3A reverts KMS encryption to the bucket's default KMS key in rename/copy > - > > Key: HADOOP-16794 > URL: https://issues.apache.org/jira/browse/HADOOP-16794 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Fix For: 3.3.0 > > > When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all > files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the > wrong encryption key, always falling back to the region-specific AWS-managed > KMS key for S3, instead of retaining the custom CMK.
[jira] [Updated] (HADOOP-16794) S3A reverts KMS encryption to the bucket's default KMS key in rename/copy
[ https://issues.apache.org/jira/browse/HADOOP-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16794: Summary: S3A reverts KMS encryption to the bucket's default KMS key in rename/copy (was: S3 Encryption is always using default region-specific AWS-managed KMS key) > S3A reverts KMS encryption to the bucket's default KMS key in rename/copy > - > > Key: HADOOP-16794 > URL: https://issues.apache.org/jira/browse/HADOOP-16794 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > > When using (bucket-level) S3 Default Encryption with SSE-KMS and a CMK, all > files uploaded via the HDFS {{FileSystem}} {{s3a://}} scheme receive the > wrong encryption key, always falling back to the region-specific AWS-managed > KMS key for S3, instead of retaining the custom CMK.
[GitHub] [hadoop] steveloughran merged pull request #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
steveloughran merged pull request #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation URL: https://github.com/apache/hadoop/pull/1823#issuecomment-592163392 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 8s | trunk passed | | +1 :green_heart: | compile | 0m 36s | trunk passed | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | -1 :x: | shadedclient | 16m 22s | branch has errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed | | +0 :ok: | spotbugs | 1m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 1s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | the patch passed | | +1 :green_heart: | compile | 0m 29s | the patch passed | | +1 :green_heart: | javac | 0m 29s | the patch passed | | -0 :warning: | checkstyle | 0m 20s | hadoop-tools/hadoop-aws: The patch generated 18 new + 17 unchanged - 3 fixed = 35 total (was 20) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 15m 37s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 24s | the patch passed | | +1 :green_heart: | findbugs | 1m 5s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 61m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1823 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 0ed51e100dcb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 10461e0 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/8/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/8/testReport/ | | Max. process+thread count | 455 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/8/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[jira] [Resolved] (HADOOP-16767) S3AInputStream reopening does not handle non IO exceptions properly
[ https://issues.apache.org/jira/browse/HADOOP-16767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16767. - Fix Version/s: 3.3.0 Resolution: Fixed > S3AInputStream reopening does not handle non IO exceptions properly > --- > > Key: HADOOP-16767 > URL: https://issues.apache.org/jira/browse/HADOOP-16767 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.1.1 >Reporter: Sergei Poganshev >Assignee: Sergei Poganshev >Priority: Major > Fix For: 3.3.0 > > > Since only {{IOException}} is getting caught in > [closeStream|https://github.com/apache/hadoop/blob/24080666e5e2214d4a362c889cd9aa617be5de81/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L579], > {{reopen()}} fails in case {{SdkClientException}} occurs on a drain attempt. > This leads to multiple failing retries of {{invoker.retry("read",...)}} and > then the whole {{read()}} operation failing.
[GitHub] [hadoop] steveloughran commented on issue #1766: HADOOP-16767 Handle non-IO exceptions in reopen()
steveloughran commented on issue #1766: HADOOP-16767 Handle non-IO exceptions in reopen() URL: https://github.com/apache/hadoop/pull/1766#issuecomment-593514135 well, it's a minor patch so I'll D/L and test myself: `-Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo`. So +1, merging in.
[GitHub] [hadoop] steveloughran merged pull request #1766: HADOOP-16767 Handle non-IO exceptions in reopen()
steveloughran merged pull request #1766: HADOOP-16767 Handle non-IO exceptions in reopen() URL: https://github.com/apache/hadoop/pull/1766
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386488058 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java ## @@ -0,0 +1,349 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; +import org.apache.hadoop.conf.Configuration; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Random; +import org.junit.Assert; +import org.junit.Test; + +import org.mockito.ArgumentCaptor; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoMoreInteractions; +import static org.mockito.Mockito.*; +/** + * Test useragent of abfs client. 
+ * + */ +public final class TestAbfsOutputStream { Review comment: Mockito tests are always a maintenance pain because they are so brittle and so hard to understand what is going on - for example, here I couldn't really understand any of the tests. Could you add some more detail as a comment for each test - at least to the level of what each test case is looking for.
[jira] [Updated] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16818: Component/s: fs/azure > ABFS: Combine append+flush calls for blockblob & appendblob > > > Key: HADOOP-16818 > URL: https://issues.apache.org/jira/browse/HADOOP-16818 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Priority: Minor > > Combine append+flush calls for blockblob & appendblob
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386486689 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java ## @@ -0,0 +1,349 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; +import org.apache.hadoop.conf.Configuration; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Random; +import org.junit.Assert; +import org.junit.Test; + +import org.mockito.ArgumentCaptor; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoMoreInteractions; +import static org.mockito.Mockito.*; +/** + * Test useragent of abfs client. 
+ * + */ +public final class TestAbfsOutputStream { + + private static int bufferSize = 4096; + private static int writeSize = 1000; + private static String path = "~/testpath"; + private final String globalKey = "fs.azure.configuration"; + private final String accountName1 = "account1"; + private final String accountKey1 = globalKey + "." + accountName1; + private final String accountValue1 = "one"; + + @Test + public void verifyShortWriteRequest() throws Exception { + +AbfsClient client = mock(AbfsClient.class); +AbfsRestOperation op = mock(AbfsRestOperation.class); +AbfsConfiguration abfsConf; +final Configuration conf = new Configuration(); +conf.set(accountKey1, accountValue1); +abfsConf = new AbfsConfiguration(conf, accountName1); +AbfsPerfTracker tracker = new AbfsPerfTracker("test", accountName1, abfsConf); +when(client.getAbfsPerfTracker()).thenReturn(tracker); +when(client.append(anyString(), anyLong(), any(byte[].class), anyInt(), anyInt(), anyBoolean(), anyBoolean())).thenReturn(op); + +AbfsOutputStream out = new AbfsOutputStream(client, path, 0, bufferSize, true, false, true, false); +final byte[] b = new byte[writeSize]; +new Random().nextBytes(b); +out.write(b); +out.hsync(); +ArgumentCaptor acString = ArgumentCaptor.forClass(String.class); +ArgumentCaptor acLong = ArgumentCaptor.forClass(Long.class); +ArgumentCaptor acInt = ArgumentCaptor.forClass(Integer.class); +ArgumentCaptor acBool = ArgumentCaptor.forClass(Boolean.class); +ArgumentCaptor acByteArray = ArgumentCaptor.forClass(byte[].class); + +final byte[] b1 = new byte[2*writeSize]; +new Random().nextBytes(b1); +out.write(b1); +out.flush(); +out.hflush(); + +out.hsync(); + +verify(client, times(2)).append(acString.capture(), acLong.capture(), acByteArray.capture(), acInt.capture(), acInt.capture(), acBool.capture(), acBool.capture()); +Assert.assertEquals(Arrays.asList(path, path) , acString.getAllValues()); +Assert.assertEquals(Arrays.asList(Long.valueOf(0), Long.valueOf(writeSize)), 
acLong.getAllValues()); +//flush=true, close=false, flush=true, close=false +Assert.assertEquals(Arrays.asList(true, false, true, false), acBool.getAllValues()); +Assert.assertEquals(Arrays.asList(0,writeSize, 0, 2*writeSize), acInt.getAllValues()); + +//verifyNoMoreInteractions(client); + + } + + @Test + public void verifyWriteRequest() throws Exception { + +AbfsClient client = mock(AbfsClient.class); +AbfsRestOperation op = mock(AbfsRestOperation.class); +AbfsConfiguration abfsConf; +final Configuration conf = new Configuration(); +conf.set(accountKey1, accountValue1); +abfsConf = new AbfsConfiguration(conf, accountName1); +AbfsPerfTracker tracker = new AbfsPerfTracker("test", accountName1, abfsConf); + +when(client.getAbfsPerfTracker()).thenReturn(tracker); +when(client.append(anyString(), anyLong(), any(byte[].class), anyInt(), anyInt(), anyBoolean(), anyBoolean())).thenReturn(op); + +AbfsOutputStream out = new AbfsOutputStream(client, path, 0, bufferSize, true, false, true,
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386485772 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java ## @@ -0,0 +1,268 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Random; +import org.junit.Assert; +import org.junit.Test; + +import org.mockito.ArgumentCaptor; +import static org.mockito.Mockito.*; Review comment: ideally, yes, though we are a bit more relaxed about static imports...
[jira] [Updated] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16818: Affects Version/s: 3.3.0 > ABFS: Combine append+flush calls for blockblob & appendblob > > > Key: HADOOP-16818 > URL: https://issues.apache.org/jira/browse/HADOOP-16818 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Priority: Minor > > Combine append+flush calls for blockblob & appendblob
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386485353 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java ## @@ -0,0 +1,349 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; Review comment: org.apache imports need to go into their own block just above any static imports, ordering imports

```
java., javax.
-- other ---
org.apache
--- static
```

This is to try and keep cherry-picking under control.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386488149 ## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsOutputStream.java ## @@ -0,0 +1,349 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; +import org.apache.hadoop.fs.azurebfs.AbfsConfiguration; +import org.apache.hadoop.conf.Configuration; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Random; +import org.junit.Assert; +import org.junit.Test; + +import org.mockito.ArgumentCaptor; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoMoreInteractions; +import static org.mockito.Mockito.*; +/** + * Test useragent of abfs client. Review comment: check this
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386481585 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java ## @@ -60,6 +61,12 @@ * documentation does not have such expectations of data being persisted. * Default value of this config is true. **/ public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = "fs.azure.disable.outputstream.flush"; + public static final String FS_AZURE_ENABLE_APPEND_WITH_FLUSH = "fs.azure.enable.appendwithflush"; + /** Provides a config control to disable or enable OutputStream Flush API Review comment: javadocs should be above the new option
[GitHub] [hadoop] steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
steveloughran commented on a change in pull request #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#discussion_r386483373 ## File path: hadoop-tools/hadoop-azure/src/site/markdown/abfs.md ## @@ -643,6 +643,10 @@ Consult the javadocs for `org.apache.hadoop.fs.azurebfs.constants.ConfigurationK` `org.apache.hadoop.fs.azurebfs.AbfsConfiguration` for the full list of configuration options and their default values. +### Append Blob Directories Options +### Config `fs.azure.appendblob.key` provides Review comment: should be nested (i.e. ); duplicate name will confuse link generation. Just cut the "a name" tag from the second line
[GitHub] [hadoop] steveloughran commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm,
steveloughran commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r386483778 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java ## @@ -81,6 +81,20 @@ public void checkPermission(String fsOwner, String supergroup, } CALLED.add("checkPermission|" + ancestorAccess + "|" + parentAccess + "|" + access); } + + @Override + public void checkPermissionWithContext( + AuthorizationContext authzContext) throws AccessControlException { +if (authzContext.ancestorIndex > 1 +&& authzContext.inodes[1].getLocalName().equals("user") +&& authzContext.inodes[2].getLocalName().equals("acl")) { + this.ace.checkPermissionWithContext(authzContext); +} +CALLED.add("checkPermission|" + authzContext.ancestorAccess + "|" +authzContext.parentAccess + "|" + authzContext.access); + } + + public void abc() {} Review comment: what does this do?
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-583380344 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 24s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | -1 :x: | mvninstall | 27m 52s | root in trunk failed. | | +1 :green_heart: | compile | 0m 35s | trunk passed | | +1 :green_heart: | checkstyle | 0m 35s | trunk passed | | +1 :green_heart: | mvnsite | 0m 50s | trunk passed | | +1 :green_heart: | shadedclient | 21m 13s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | trunk passed | | +0 :ok: | spotbugs | 0m 50s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 47s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 15s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 15s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 15s | hadoop-azure in the patch failed. | | -0 :warning: | checkstyle | 0m 14s | The patch fails to run checkstyle in hadoop-azure | | -1 :x: | mvnsite | 0m 16s | hadoop-azure in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 38s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 19s | hadoop-azure in the patch failed. 
| | -1 :x: | findbugs | 0m 18s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 18s | hadoop-azure in the patch failed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 71m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux 7745ce42e0ec 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / fafe78f | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/branch-mvninstall-root.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt | | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/testReport/ | | Max. process+thread count | 413 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-579239147 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 42m 7s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 32s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | trunk passed | | +1 :green_heart: | shadedclient | 16m 7s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 28s | trunk passed | | +0 :ok: | spotbugs | 1m 7s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 6s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 17s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 13s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 13s | hadoop-azure in the patch failed. | | -0 :warning: | checkstyle | 0m 13s | The patch fails to run checkstyle in hadoop-azure | | -1 :x: | mvnsite | 0m 18s | hadoop-azure in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 8s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 16s | hadoop-azure in the patch failed. 
| | -1 :x: | findbugs | 0m 15s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 15s | hadoop-azure in the patch failed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 101m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux fb8d88d7c55a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3f01c48 | | Default Java | 1.8.0_232 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-585720440 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 18m 56s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | trunk passed | | +1 :green_heart: | shadedclient | 14m 46s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | trunk passed | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 14s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 15s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 15s | hadoop-azure in the patch failed. | | -0 :warning: | checkstyle | 0m 12s | The patch fails to run checkstyle in hadoop-azure | | -1 :x: | mvnsite | 0m 16s | hadoop-azure in the patch failed. | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 13m 51s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 19s | hadoop-azure in the patch failed. | | -1 :x: | findbugs | 0m 18s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 18s | hadoop-azure in the patch failed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 54m 48s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux 2d8ef296c6b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / fe7d67a | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1790/out/maven-patch-checkstyle-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/whitespace-eol.txt | | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/testReport/ | | Max. process+thread count | 413 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/7/console | | versions
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-587440341 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 30m 22s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 21s | trunk passed | | +1 :green_heart: | compile | 0m 28s | trunk passed | | +1 :green_heart: | checkstyle | 0m 24s | trunk passed | | +1 :green_heart: | mvnsite | 0m 31s | trunk passed | | +1 :green_heart: | shadedclient | 16m 22s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 51s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 22s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 22s | hadoop-azure in the patch failed. | | -0 :warning: | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 22 new + 5 unchanged - 0 fixed = 27 total (was 5) | | -1 :x: | mvnsite | 0m 23s | hadoop-azure in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 52s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 21s | the patch passed | | -1 :x: | findbugs | 0m 27s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 28s | hadoop-azure in the patch failed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 92m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux 9c898717ba62 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a562942 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/testReport/ | | Max. 
process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/8/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-590325640 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 3s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | -1 :x: | mvninstall | 27m 16s | root in trunk failed. | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | trunk passed | | +1 :green_heart: | shadedclient | 14m 53s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 24s | the patch passed | | +1 :green_heart: | javac | 0m 24s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-azure: The patch generated 23 new + 5 unchanged - 0 fixed = 28 total (was 5) | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 47s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | -1 :x: | findbugs | 0m 55s | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 :x: | unit | 1m 22s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 66m 42s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | There is an apparent infinite recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:[line 124] | | Failed junit tests | hadoop.fs.azurebfs.services.TestAbfsOutputStream | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux 2ae0881ac296 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b5698e0 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/artifact/out/branch-mvninstall-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/testReport/ | | Max. process+thread count | 467 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/11/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-590474367 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 33m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 46s | trunk passed | | +1 :green_heart: | compile | 0m 32s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | trunk passed | | +1 :green_heart: | shadedclient | 17m 5s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 17s | hadoop-tools/hadoop-azure: The patch generated 23 new + 5 unchanged - 0 fixed = 28 total (was 5) | | +1 :green_heart: | mvnsite | 0m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 16m 23s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | -1 :x: | findbugs | 0m 57s | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 28s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 99m 38s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | There is an apparent infinite recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:[line 124] | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux f37996238dfa 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42dfd27 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/12/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/12/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/12/testReport/ | | Max. process+thread count | 311 (vs. 
ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/12/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob
hadoop-yetus removed a comment on issue #1790: [HADOOP-16818] ABFS: Combine append+flush calls for blockblob & appendblob URL: https://github.com/apache/hadoop/pull/1790#issuecomment-590257579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 54s | trunk passed | | +1 :green_heart: | compile | 0m 29s | trunk passed | | +1 :green_heart: | checkstyle | 0m 21s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 15m 21s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 22s | trunk passed | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 52s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 16s | hadoop-tools/hadoop-azure: The patch generated 23 new + 5 unchanged - 0 fixed = 28 total (was 5) | | +1 :green_heart: | mvnsite | 0m 28s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 1s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 21s | the patch passed | | -1 :x: | findbugs | 0m 58s | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 :x: | unit | 1m 22s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 58m 54s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | There is an apparent infinite recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:recursive loop in org.apache.hadoop.fs.azurebfs.services.AbfsClient.createDefaultHeaders() At AbfsClient.java:[line 124] | | Failed junit tests | hadoop.fs.azurebfs.services.TestAbfsOutputStream | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1790 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux cc0baa985310 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b5698e0 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/testReport/ | | Max. process+thread count | 414 (vs. 
ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1790/10/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp
hadoop-yetus removed a comment on issue #1794: HADOOP-15887: Add an option to avoid writing data locally in Distcp URL: https://github.com/apache/hadoop/pull/1794#issuecomment-571893263 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 10s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 24m 44s | trunk passed | | +1 :green_heart: | compile | 0m 33s | trunk passed | | +1 :green_heart: | checkstyle | 0m 28s | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | trunk passed | | +1 :green_heart: | shadedclient | 17m 7s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | trunk passed | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | -0 :warning: | checkstyle | 0m 18s | hadoop-tools/hadoop-distcp: The patch generated 1 new + 113 unchanged - 2 fixed = 114 total (was 115) | | +1 :green_heart: | mvnsite | 0m 24s | the patch passed | | +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 16m 28s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 22s | the patch passed | | +1 :green_heart: | findbugs | 0m 51s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 13m 34s | hadoop-distcp in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 81m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1794 | | JIRA Issue | HADOOP-15887 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4a750e828a67 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / a43c177 | | Default Java | 1.8.0_232 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/2/testReport/ | | Max. process+thread count | 449 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1794/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
hadoop-yetus removed a comment on issue #1851: HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries URL: https://github.com/apache/hadoop/pull/1851#issuecomment-591902751 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 24s | trunk passed | | +1 :green_heart: | compile | 0m 37s | trunk passed | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | trunk passed | | +1 :green_heart: | shadedclient | 15m 5s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 30s | trunk passed | | +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 57s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | -0 :warning: | checkstyle | 0m 20s | hadoop-tools/hadoop-aws: The patch generated 14 new + 23 unchanged - 0 fixed = 37 total (was 23) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 1s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 27s | the patch passed | | +1 :green_heart: | findbugs | 1m 2s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 29s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 59m 1s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1851 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a83358620f82 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 2059f25 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/3/testReport/ | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1851/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran edited a comment on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
steveloughran edited a comment on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-593460073 closing as it's merged in. thanks!
[GitHub] [hadoop] steveloughran commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
steveloughran commented on issue #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842#issuecomment-593460073 ok,
[GitHub] [hadoop] steveloughran closed pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface
steveloughran closed pull request #1842: HADOOP-16730 : ABFS: Add Authorizer Interface URL: https://github.com/apache/hadoop/pull/1842
[GitHub] [hadoop] steveloughran commented on issue #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface
steveloughran commented on issue #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface URL: https://github.com/apache/hadoop/pull/1869#issuecomment-593459460 hey, can I get this in before hadoop 3.3.x ships? I don't want to make any commitment in that release to keep the current interface in FileSystem as is, because it needs to be tuned for object stores too, etc. thanks.
[GitHub] [hadoop] steveloughran opened a new pull request #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface
steveloughran opened a new pull request #1869: HADOOP-16898. Batch listing of multiple directories to be an unstable interface URL: https://github.com/apache/hadoop/pull/1869 Contributed by Steve Loughran. Moves the API of HDFS-13616 into an interface which is implemented by the DFS RPC filesystem client. This new interface, BatchListingOperations, is in hadoop-common, so applications do not need to be compiled with HDFS on the classpath. They must cast the FS into the interface; instanceof can probe the client for having the new interface. The patch also adds a new path capability to probe for this. The FileSystem implementation is cut; tests updated as appropriate. All new interfaces/classes/constants are marked as @Unstable.
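The probe-then-cast usage the PR describes can be sketched with stand-in types. Only the BatchListingOperations name comes from the PR text; PlainFs, DfsLikeFs, and the batchList method below are invented for illustration:

```java
// Stand-in types sketching the pattern: code compiled against
// hadoop-common alone probes the filesystem instance for the optional
// interface instead of linking against HDFS classes directly.
interface BatchListingOperations {           // name from the PR; method invented
    int batchList(String... dirs);
}

class PlainFs {}                             // a filesystem without the capability

class DfsLikeFs extends PlainFs implements BatchListingOperations {
    public int batchList(String... dirs) {
        return dirs.length;                  // pretend each directory was listed
    }
}

class Probe {
    // Returns the number of directories listed, or -1 if unsupported.
    static int tryBatchList(PlainFs fs, String... dirs) {
        if (fs instanceof BatchListingOperations) {
            return ((BatchListingOperations) fs).batchList(dirs);
        }
        return -1;
    }
}
```

The path-capability probe mentioned in the PR serves the same purpose as the instanceof check, but lets callers ask before attempting the cast.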
[jira] [Commented] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049302#comment-17049302 ] Hadoop QA commented on HADOOP-16890: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 58s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 9m 55s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 55s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 38s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 15s{color} | {color:blue} hadoop-project has no data from findbugs {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests |
[GitHub] [hadoop] hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider
hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1866#issuecomment-593450148 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 31s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 39s | trunk passed | | +1 :green_heart: | compile | 17m 31s | trunk passed | | +1 :green_heart: | checkstyle | 2m 42s | trunk passed | | +1 :green_heart: | mvnsite | 1m 3s | trunk passed | | +1 :green_heart: | shadedclient | 20m 15s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | trunk passed | | +0 :ok: | spotbugs | 0m 58s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 28s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 37s | the patch passed | | -1 :x: | compile | 9m 55s | root in the patch failed. | | -1 :x: | javac | 9m 55s | root in the patch failed. | | -0 :warning: | checkstyle | 2m 38s | root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 45s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 15m 22s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 35s | the patch passed | | +0 :ok: | findbugs | 0m 15s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 14s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 9s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. | | | | 100m 11s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 7e9df810b74f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 0dd8956 | | Default Java | 1.8.0_242 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. 
[jira] [Commented] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049265#comment-17049265 ] Hadoop QA commented on HADOOP-16890: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 59s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} branch/hadoop-project no findbugs output file (findbugsXml.xml) {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 4s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 4s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 38s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 6s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 15s{color} | {color:blue} hadoop-project has no data from findbugs {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}101m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests |
[GitHub] [hadoop] hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider
hadoop-yetus commented on issue #1866: HADOOP-16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1866#issuecomment-593435627 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 46s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 26s | trunk passed | | +1 :green_heart: | compile | 17m 51s | trunk passed | | +1 :green_heart: | checkstyle | 2m 46s | trunk passed | | +1 :green_heart: | mvnsite | 1m 6s | trunk passed | | +1 :green_heart: | shadedclient | 20m 41s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 56s | trunk passed | | +0 :ok: | spotbugs | 0m 59s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 24s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 39s | the patch passed | | -1 :x: | compile | 10m 4s | root in the patch failed. | | -1 :x: | javac | 10m 4s | root in the patch failed. | | -0 :warning: | checkstyle | 2m 38s | root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 47s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 15m 6s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 39s | the patch passed | | +0 :ok: | findbugs | 0m 15s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 15s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 24s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | The patch does not generate ASF License warnings. | | | | 101m 59s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1866 | | JIRA Issue | HADOOP-16890 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 164f0c2cf8a7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1a636da | | Default Java | 1.8.0_242 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1866/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. 
[jira] [Created] (HADOOP-16898) Batch listing of multiple directories to be an unstable interface
Steve Loughran created HADOOP-16898: --- Summary: Batch listing of multiple directories to be an unstable interface Key: HADOOP-16898 URL: https://issues.apache.org/jira/browse/HADOOP-16898 Project: Hadoop Common Issue Type: Improvement Components: fs Affects Versions: 3.3.0 Reporter: Steve Loughran Assignee: Steve Loughran HDFS-13616 added a new API for batch listing of multiple directories, but it isn't yet ready for tagging as stable & doesn't suit object stores. * the new API is pulled into a new interface marked unstable; * new classes (PartialListing) also tagged unstable. * Define a new path capability. HDFS will implement, but not filter/HarFS; it is an HDFS exclusive implementation for now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17049218#comment-17049218 ] Hudson commented on HADOOP-16885: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18016 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18016/]) HADOOP-16885. Encryption zone file copy failure leaks a temp file (github: rev 0dd8956f2e4bd7cd2315ef23703e4b2da1a0d073) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java > Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped > stream > --- > > Key: HADOOP-16885 > URL: https://issues.apache.org/jira/browse/HADOOP-16885 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: 3.3.0 > > > Copying a file into an encryption zone on trunk with HADOOP-16490 leaves a leaked temp > file ._COPYING_ and potentially an unclosed wrapped stream. This ticket is > opened to track the fix for it.
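The leak described in this ticket (a ._COPYING_ temp file and an unclosed wrapped stream surviving a failed copy) points at the usual copy-via-temp cleanup discipline: write under the temp name with try-with-resources, rename into place on success, and delete the temp on any failure. A hedged sketch using java.nio.file stand-ins rather than the Hadoop FileSystem API:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of a leak-free copy: the stream is closed by try-with-resources
// whether or not the write succeeds, and the "._COPYING_" temp file is
// removed in the finally block (a no-op once the rename has happened).
class SafeCopy {
    static void copyInto(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + "._COPYING_");
        try {
            try (OutputStream out = Files.newOutputStream(tmp)) {
                out.write(data);             // may throw mid-write
            }                                // stream closed here, success or failure
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            Files.deleteIfExists(tmp);       // cleans up only after a failure
        }
    }
}
```

The bug pattern the fix addresses is skipping both the close and the delete on the failure path, which leaves exactly the two artifacts the ticket names.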