[jira] [Updated] (HADOOP-16542) Update commons-beanutils version to 1.9.4
[ https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated HADOOP-16542: --- Fix Version/s: 3.2.2 3.1.4 > Update commons-beanutils version to 1.9.4 > - > > Key: HADOOP-16542 > URL: https://issues.apache.org/jira/browse/HADOOP-16542 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: kevin su >Priority: Major > Labels: release-blocker > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HADOOP-16542.001.patch, HADOOP-16542.002.patch, > HADOOP-16542.003.patch > > > [http://mail-archives.apache.org/mod_mbox/www-announce/201908.mbox/%3cc628798f-315d-4428-8cb1-4ed1ecc95...@apache.org%3e] > {quote} > CVE-2019-10086. Apache Commons Beanutils does not suppress the class > property in PropertyUtilsBean > by default. > Severity: Medium > Vendor: The Apache Software Foundation > Versions Affected: commons-beanutils-1.9.3 and earlier > Description: A special BeanIntrospector class was added in version 1.9.2. > This can be used to stop attackers from using the class property of > Java objects to get access to the classloader. > However this protection was not enabled by default. > PropertyUtilsBean (and consequently BeanUtilsBean) now disallows class > level property access by default, thus protecting against > CVE-2014-0114. > Mitigation: 1.X users should migrate to 1.9.4. > {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
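The mitigation the quoted advisory describes, refusing to resolve the implicit "class" bean property so callers cannot reach `bean.getClass().getClassLoader()`, can be illustrated with a self-contained sketch using only the JDK's `java.beans` introspector. This is a hypothetical illustration (`ClassPropertyGuard` and `SampleBean` are invented names), not the commons-beanutils code; in 1.9.4 the equivalent guard (`SuppressPropertiesBeanIntrospector`) is enabled by default in `PropertyUtilsBean`.

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class ClassPropertyGuard {

    // CVE-2019-10086 / CVE-2014-0114 hinge on the implicit "class" property
    // every Java bean exposes via getClass(): resolving it hands an attacker
    // bean.getClass().getClassLoader(). A hardened bean mapper refuses it.
    public static boolean isAllowedProperty(String name) {
        return !"class".equals(name);
    }

    // List the readable properties of a bean class, skipping "class".
    public static List<String> readableProperties(Class<?> beanClass) {
        try {
            List<String> names = new ArrayList<>();
            BeanInfo info = Introspector.getBeanInfo(beanClass);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (pd.getReadMethod() != null && isAllowedProperty(pd.getName())) {
                    names.add(pd.getName());
                }
            }
            return names;
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hypothetical sample bean for the demo.
    public static class SampleBean {
        public String getOwner() { return "hadoop"; }
    }

    public static void main(String[] args) {
        // Without the guard, the introspector would also report "class".
        System.out.println(readableProperties(SampleBean.class)); // prints: [owner]
    }
}
```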
[jira] [Commented] (HADOOP-16542) Update commons-beanutils version to 1.9.4
[ https://issues.apache.org/jira/browse/HADOOP-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943336#comment-16943336 ] Jonathan Hung commented on HADOOP-16542: Committed to branch-3.2/branch-3.1.
[GitHub] [hadoop] vivekratnavel commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
vivekratnavel commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537784252 +1 LGTM This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943325#comment-16943325 ] Jonathan Hung commented on HADOOP-16588: Thx [~iwasakims] and [~weichiu]! > Update commons-beanutils version to 1.9.4 in branch-2 > - > > Key: HADOOP-16588 > URL: https://issues.apache.org/jira/browse/HADOOP-16588 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Critical > Labels: release-blocker > Fix For: 2.10.0 > > Attachments: HADOOP-16588-branch-2.002.patch, > HADOOP-16588.branch-2.001.patch > > > Similar to HADOOP-16542 but we need to do it differently. > In branch-2, we pull in commons-beanutils through commons-configuration 1.6 > --> commons-digester 1.8 > {noformat} > [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile > [INFO] | +- commons-digester:commons-digester:jar:1.8:compile > [INFO] | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile > [INFO] | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile > {noformat} > I have a patch to update version of the transitive dependency.
[jira] [Updated] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-16588: -- Fix Version/s: 2.10.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) committed to branch-2.
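For branch-2, where commons-beanutils arrives only transitively (via the commons-configuration 1.6 / commons-digester 1.8 tree quoted in the issue description), the usual Maven approach is to exclude the stale transitive artifacts and pin the patched version directly. The fragment below is a hypothetical sketch of that pattern, not the actual HADOOP-16588 patch; a later comment on this thread confirms that the real change does leave commons-beanutils-core excluded and ships commons-beanutils-1.9.4.jar.

```xml
<!-- Hypothetical sketch (not the actual HADOOP-16588 patch): exclusions on a
     dependency apply to its whole transitive tree, so excluding both beanutils
     artifacts here covers the paths through commons-digester as well. -->
<dependency>
  <groupId>commons-configuration</groupId>
  <artifactId>commons-configuration</artifactId>
  <version>1.6</version>
  <exclusions>
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Pin the fixed release explicitly. -->
<dependency>
  <groupId>commons-beanutils</groupId>
  <artifactId>commons-beanutils</artifactId>
  <version>1.9.4</version>
</dependency>
```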
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330854344

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
## @@ -80,6 +81,20 @@ boolean removeStorage(DatanodeStorageInfo storage) {
     return true;
   }

+  @Override
+  boolean isProvided() {
+    int len = getCapacity();
+    for(int idx = 0; idx < len; idx++) {

Review comment: Fixed
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330854328

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
## @@ -80,6 +81,20 @@ boolean removeStorage(DatanodeStorageInfo storage) {
     return true;
   }

+  @Override
+  boolean isProvided() {
+    int len = getCapacity();
+    for(int idx = 0; idx < len; idx++) {
+      DatanodeStorageInfo cur = getStorageInfo(idx);
+      if(cur != null) {

Review comment: Fixed
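The hunk under review scans a block's storage slots, skipping null entries, presumably returning true once a replica on provided storage is found. A self-contained sketch of that null-guarded scan follows; `StorageType`, `StorageInfo`, and `BlockInfo` are minimal hypothetical stand-ins for the real HDFS classes, which are far richer.

```java
public class ProvidedCheckSketch {

    // Minimal hypothetical stand-ins for the HDFS types in the diff.
    public enum StorageType { DISK, SSD, PROVIDED }

    public static class StorageInfo {
        private final StorageType type;
        public StorageInfo(StorageType type) { this.type = type; }
        public StorageType getStorageType() { return type; }
    }

    public static class BlockInfo {
        private final StorageInfo[] storages;
        public BlockInfo(StorageInfo... storages) { this.storages = storages; }

        public int getCapacity() { return storages.length; }
        public StorageInfo getStorageInfo(int idx) { return storages[idx]; }

        // The reviewed pattern: scan every slot, skip empty (null) ones,
        // and report whether any replica sits on PROVIDED storage.
        public boolean isProvided() {
            for (int idx = 0; idx < getCapacity(); idx++) {
                StorageInfo cur = getStorageInfo(idx);
                if (cur != null && cur.getStorageType() == StorageType.PROVIDED) {
                    return true;
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        BlockInfo local = new BlockInfo(new StorageInfo(StorageType.DISK), null);
        BlockInfo mixed = new BlockInfo(null, new StorageInfo(StorageType.PROVIDED));
        System.out.println(local.isProvided() + " " + mixed.isProvided()); // prints: false true
    }
}
```

The null check is the point of the second review thread: slots in the storage array may legitimately be empty, so dereferencing without the guard would throw.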
[jira] [Commented] (HADOOP-16605) NPE in TestAdlSdkConfiguration failing in yetus
[ https://issues.apache.org/jira/browse/HADOOP-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943302#comment-16943302 ] Aaron Fabbri commented on HADOOP-16605: --- PR looks good to me. +1 > NPE in TestAdlSdkConfiguration failing in yetus > --- > > Key: HADOOP-16605 > URL: https://issues.apache.org/jira/browse/HADOOP-16605 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Sneha Vijayarajan >Priority: Major > > Yetus builds are failing with NPE in TestAdlSdkConfiguration if they go near > hadoop-azure-datalake. Assuming HADOOP-16438 until proven differently, though > HADOOP-16371 may have done something too (how?), something which wasn't > picked up as yetus didn't know that hadoop-azure-datalake was affected.
[jira] [Commented] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943300#comment-16943300 ] Hadoop QA commented on HADOOP-16625:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 26m 54s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 14 new or modified test files. |
|| || || || branch-3.1 Compile Tests ||
| 0 | mvndep | 2m 27s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 24s | branch-3.1 passed |
| +1 | compile | 16m 0s | branch-3.1 passed |
| +1 | checkstyle | 2m 45s | branch-3.1 passed |
| +1 | mvnsite | 2m 21s | branch-3.1 passed |
| +1 | shadedclient | 17m 30s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 42s | branch-3.1 passed |
| +1 | javadoc | 1m 52s | branch-3.1 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 46s | the patch passed |
| +1 | compile | 12m 36s | the patch passed |
| +1 | javac | 12m 36s | root generated 0 new + 1273 unchanged - 3 fixed = 1273 total (was 1276) |
| -0 | checkstyle | 2m 14s | root: The patch generated 17 new + 347 unchanged - 3 fixed = 364 total (was 350) |
| +1 | mvnsite | 2m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 1s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 26s | the patch passed |
| +1 | javadoc | 2m 4s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 33s | hadoop-common in the patch passed. |
| -1 | unit | 114m 44s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 20s | The patch does not generate ASF License warnings. |
| | | 258m 15s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
| | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
| | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:080e9d0f9b3 |
| JIRA Issue | HADOOP-16625 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982028/HADOOP-16625.branch-3.1.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943293#comment-16943293 ] Wei-Chiu Chuang commented on HADOOP-16588: -- +1 thank you.
[jira] [Commented] (HADOOP-16531) Log more detail for slow RPC
[ https://issues.apache.org/jira/browse/HADOOP-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943289#comment-16943289 ] Chen Zhang commented on HADOOP-16531: - Thanks [~weichiu] for the commit. > Log more detail for slow RPC > > > Key: HADOOP-16531 > URL: https://issues.apache.org/jira/browse/HADOOP-16531 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chen Zhang >Assignee: Chen Zhang >Priority: Major > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HADOOP-16531.001.patch > > > The current implementation only logs the processing time > {code:java} > if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) && > (processingTime > threeSigma)) { > LOG.warn("Slow RPC : {} took {} {} to process from client {}", > methodName, processingTime, RpcMetrics.TIMEUNIT, call); > rpcMetrics.incrSlowRpc(); > } > {code} > We need to log more details to help us locate the problem (e.g. how long it > takes to request the lock, hold the lock, or do other things)
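The improvement the description asks for, breaking a slow call's total time into phases such as lock wait and lock hold, can be sketched as follows. This is a hypothetical illustration rather than the HADOOP-16531 patch itself; `Breakdown` and `slowRpcMessage` are invented names, and the real code logs through SLF4J rather than building a string.

```java
public class RpcTimingSketch {

    // Hypothetical per-call phase timings, in the spirit of "how long it
    // takes to request the lock, hold the lock, or do other things".
    public static final class Breakdown {
        public long lockWaitNanos;
        public long lockHeldNanos;
        public long responseNanos;

        public long totalNanos() {
            return lockWaitNanos + lockHeldNanos + responseNanos;
        }
    }

    // Extend the existing "Slow RPC : {} took {} ..." warning with a phase
    // breakdown, so the log says where the time went, not just the total.
    public static String slowRpcMessage(String method, Breakdown b) {
        return String.format(
            "Slow RPC : %s took %d ns (lockWait=%d ns, lockHeld=%d ns, response=%d ns)",
            method, b.totalNanos(), b.lockWaitNanos, b.lockHeldNanos, b.responseNanos);
    }

    public static void main(String[] args) {
        Breakdown b = new Breakdown();
        b.lockWaitNanos = 700;
        b.lockHeldNanos = 200;
        b.responseNanos = 100;
        System.out.println(slowRpcMessage("getBlockLocations", b));
        // prints: Slow RPC : getBlockLocations took 1000 ns (lockWait=700 ns, lockHeld=200 ns, response=100 ns)
    }
}
```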
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943287#comment-16943287 ] Masatake Iwasaki commented on HADOOP-16588: --- +1. Thanks, [~jhung]. commons-beanutils-core looks excluded as expected. I will commit this shortly.
{noformat}
$ find hadoop-dist/target/hadoop-2.10.0-SNAPSHOT -name '*beanutils*'
hadoop-dist/target/hadoop-2.10.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.9.4.jar
hadoop-dist/target/hadoop-2.10.0-SNAPSHOT/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-beanutils-1.9.4.jar
hadoop-dist/target/hadoop-2.10.0-SNAPSHOT/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/commons-beanutils-1.9.4.jar
hadoop-dist/target/hadoop-2.10.0-SNAPSHOT/share/hadoop/yarn/lib/commons-beanutils-1.9.4.jar
hadoop-dist/target/hadoop-2.10.0-SNAPSHOT/share/hadoop/tools/lib/commons-beanutils-1.9.4.jar
{noformat}
[GitHub] [hadoop] cxorm commented on issue #1570: HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in co…
cxorm commented on issue #1570: HDDS-2216. Rename HADOOP_RUNNER_VERSION to OZONE_RUNNER_VERSION in co… URL: https://github.com/apache/hadoop/pull/1570#issuecomment-537751265 Thanks @adoroszlai I am going to check unit test
[GitHub] [hadoop] cxorm commented on issue #1559: HDDS-1737. Add Volume check in KeyManager and File Operations.
cxorm commented on issue #1559: HDDS-1737. Add Volume check in KeyManager and File Operations. URL: https://github.com/apache/hadoop/pull/1559#issuecomment-537750063 Yes, I will check it soon.
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943271#comment-16943271 ] Hadoop QA commented on HADOOP-16588:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 23m 34s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || branch-2 Compile Tests ||
| 0 | mvndep | 1m 55s | Maven dependency ordering for branch |
| +1 | mvninstall | 11m 28s | branch-2 passed |
| +1 | compile | 12m 8s | branch-2 passed with JDK v1.7.0_95 |
| +1 | compile | 9m 58s | branch-2 passed with JDK v1.8.0_222 |
| +1 | mvnsite | 1m 34s | branch-2 passed |
| +1 | javadoc | 1m 31s | branch-2 passed with JDK v1.7.0_95 |
| +1 | javadoc | 1m 30s | branch-2 passed with JDK v1.8.0_222 |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 36s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 11m 34s | the patch passed with JDK v1.7.0_95 |
| +1 | javac | 11m 34s | the patch passed |
| +1 | compile | 10m 36s | the patch passed with JDK v1.8.0_222 |
| +1 | javac | 10m 36s | the patch passed |
| +1 | mvnsite | 1m 28s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | javadoc | 1m 24s | the patch passed with JDK v1.7.0_95 |
| +1 | javadoc | 1m 13s | the patch passed with JDK v1.8.0_222 |
|| || || || Other Tests ||
| +1 | unit | 0m 18s | hadoop-project in the patch passed. |
| +1 | unit | 11m 5s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 106m 57s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HADOOP-16588 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982030/HADOOP-16588-branch-2.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 6cd884a0020a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / c57e6bc3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222
[GitHub] [hadoop] hadoop-yetus commented on issue #1579: HDDS-2217 : Remove log4j and audit configuration from the docker-config files
hadoop-yetus commented on issue #1579: HDDS-2217 : Remove log4j and audit configuration from the docker-config files URL: https://github.com/apache/hadoop/pull/1579#issuecomment-537740219

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 107 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | |
| -1 | mvninstall | 40 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 41 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-hdds in trunk failed. |
| -1 | compile | 15 | hadoop-ozone in trunk failed. |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 896 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
| -0 | patch | 975 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| | _ Patch Compile Tests _ | | |
| -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
| -1 | compile | 24 | hadoop-hdds in the patch failed. |
| -1 | compile | 18 | hadoop-ozone in the patch failed. |
| -1 | javac | 24 | hadoop-hdds in the patch failed. |
| -1 | javac | 18 | hadoop-ozone in the patch failed. |
| +1 | mvnsite | 0 | the patch passed |
| +1 | shellcheck | 0 | There were no new shellcheck issues. |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 831 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 22 | hadoop-ozone in the patch failed. |
| | _ Other Tests _ | | |
| -1 | unit | 32 | hadoop-hdds in the patch failed. |
| -1 | unit | 30 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
| | | 2425 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1579 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient shellcheck shelldocs |
| uname | Linux 51d47d0280a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4c24f24 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-hdds.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1579/1/artifact/out/patch-javadoc-hadoop-ozone.txt |
| unit |
[GitHub] [hadoop] anuengineer commented on issue #1559: HDDS-1737. Add Volume check in KeyManager and File Operations.
anuengineer commented on issue #1559: HDDS-1737. Add Volume check in KeyManager and File Operations. URL: https://github.com/apache/hadoop/pull/1559#issuecomment-537734659 Can you please check the unit test failures? thanks
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943258#comment-16943258 ] Hadoop QA commented on HADOOP-16588: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 51s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 17s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 9s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} | || || || || 
{color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 21s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 32s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 36s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da675796017 | | JIRA Issue | HADOOP-16588 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982030/HADOOP-16588-branch-2.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 03ed7a06b123 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / c57e6bc3 | | maven | version: Apache Maven 3.3.9 | | Default Java |
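For readers following along, a CVE-driven dependency bump like this one typically lands as a single version change in `hadoop-project/pom.xml`. The fragment below is an illustrative sketch only; the exact location and surrounding elements in branch-2 may differ from what is shown here:

```xml
<!-- Illustrative sketch: verify the actual coordinates and placement
     against branch-2's hadoop-project/pom.xml before relying on this. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <!-- 1.9.4 fixes CVE-2019-10086 (class property access in PropertyUtilsBean) -->
      <version>1.9.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```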
[GitHub] [hadoop] goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330819935

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java

@@ -80,6 +81,20 @@
   boolean removeStorage(DatanodeStorageInfo storage) {
     return true;
   }

+  @Override
+  boolean isProvided() {
+    int len = getCapacity();
+    for(int idx = 0; idx < len; idx++) {

Review comment: space after the `for`
[GitHub] [hadoop] goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
goiri commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330819984

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java

@@ -80,6 +81,20 @@
   boolean removeStorage(DatanodeStorageInfo storage) {
     return true;
   }

+  @Override
+  boolean isProvided() {
+    int len = getCapacity();
+    for(int idx = 0; idx < len; idx++) {
+      DatanodeStorageInfo cur = getStorageInfo(idx);
+      if(cur != null) {

Review comment: Space after the `if`
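Putting both review comments together, a style-corrected version of the new method might look like the sketch below. The HDFS types are stubbed as minimal stand-ins, and the PROVIDED-storage check inside the loop is an assumption based on the HDFS-14889 summary, not the actual patch body:

```java
// Self-contained sketch of the isProvided() loop with the whitespace fixes
// requested in review (space after `for` and after `if`). StorageType and
// DatanodeStorageInfo here are stand-ins, not the real HDFS classes.
enum StorageType { DISK, PROVIDED }

class DatanodeStorageInfo {
  private final StorageType type;
  DatanodeStorageInfo(StorageType type) { this.type = type; }
  StorageType getStorageType() { return type; }
}

class BlockInfoSketch {
  private final DatanodeStorageInfo[] storages;
  BlockInfoSketch(DatanodeStorageInfo[] storages) { this.storages = storages; }

  int getCapacity() { return storages.length; }
  DatanodeStorageInfo getStorageInfo(int idx) { return storages[idx]; }

  // Assumed behavior: true if any non-null storage slot is PROVIDED.
  boolean isProvided() {
    int len = getCapacity();
    for (int idx = 0; idx < len; idx++) {                               // space after `for`
      DatanodeStorageInfo cur = getStorageInfo(idx);
      if (cur != null && cur.getStorageType() == StorageType.PROVIDED) { // space after `if`
        return true;
      }
    }
    return false;
  }
}

public class IsProvidedDemo {
  public static void main(String[] args) {
    BlockInfoSketch diskOnly = new BlockInfoSketch(
        new DatanodeStorageInfo[] { null, new DatanodeStorageInfo(StorageType.DISK) });
    BlockInfoSketch withProvided = new BlockInfoSketch(
        new DatanodeStorageInfo[] { new DatanodeStorageInfo(StorageType.PROVIDED) });
    System.out.println(diskOnly.isProvided());     // false
    System.out.println(withProvided.isProvided()); // true
  }
}
```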
[GitHub] [hadoop] anuengineer commented on issue #1514: HDDS-2072. Make StorageContainerLocationProtocolService message based
anuengineer commented on issue #1514: HDDS-2072. Make StorageContainerLocationProtocolService message based URL: https://github.com/apache/hadoop/pull/1514#issuecomment-537729587 I have rebased and committed this change. Thank you for the contribution.
[GitHub] [hadoop] anuengineer closed pull request #1514: HDDS-2072. Make StorageContainerLocationProtocolService message based
anuengineer closed pull request #1514: HDDS-2072. Make StorageContainerLocationProtocolService message based URL: https://github.com/apache/hadoop/pull/1514
[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections
[ https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16126: - Fix Version/s: 3.2.2 3.1.4 > ipc.Client.stop() may sleep too long to wait for all connections > > > Key: HADOOP-16126 > URL: https://issues.apache.org/jira/browse/HADOOP-16126 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.4, 3.2.2 > > Attachments: c16126_20190219.patch, c16126_20190220.patch, > c16126_20190221.patch > > > {code} > //Client.java > public void stop() { > ... > // wait until all connections are closed > while (!connections.isEmpty()) { > try { > Thread.sleep(100); > } catch (InterruptedException e) { > } > } > ... > } > {code} > In the code above, the sleep time is 100ms. We found that simply changing > the sleep time to 10ms could improve a Hive job running time by 10x. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] christeoh opened a new pull request #1579: HDDS-2217 : Remove log4j and audit configuration from the docker-config files
christeoh opened a new pull request #1579: HDDS-2217 : Remove log4j and audit configuration from the docker-config files URL: https://github.com/apache/hadoop/pull/1579 Removed redundant and potentially confusing LOG4J entries.
[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943238#comment-16943238 ] Hadoop QA commented on HADOOP-16152: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 41s{color} | {color:red} root generated 11 new + 1843 unchanged - 0 fixed = 1854 total (was 1843) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 30s{color} | {color:red} patch has errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-runtime hadoop-client-modules/hadoop-client-minicluster {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 47s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s{color} | {color:green} hadoop-yarn-applications-catalog-webapp in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-client-runtime in the patch passed. {color} | |
[GitHub] [hadoop] hadoop-yetus commented on issue #1578: HDDS-2222 Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
hadoop-yetus commented on issue #1578: HDDS-2222 Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C URL: https://github.com/apache/hadoop/pull/1578#issuecomment-537721939 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 39 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. | | -1 | compile | 22 | hadoop-hdds in trunk failed. | | -1 | compile | 16 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 56 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 960 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 21 | hadoop-hdds in trunk failed. | | -1 | javadoc | 19 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1055 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 30 | hadoop-hdds in trunk failed. | | -1 | findbugs | 21 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. | | -1 | compile | 26 | hadoop-hdds in the patch failed. | | -1 | compile | 18 | hadoop-ozone in the patch failed. | | -1 | javac | 26 | hadoop-hdds in the patch failed. | | -1 | javac | 18 | hadoop-ozone in the patch failed. | | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 10 new + 0 unchanged - 0 fixed = 10 total (was 0) | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 749 | patch has no errors when building and testing our client artifacts. 
| | -1 | javadoc | 21 | hadoop-hdds in the patch failed. | | -1 | javadoc | 18 | hadoop-ozone in the patch failed. | | -1 | findbugs | 32 | hadoop-hdds in the patch failed. | | -1 | findbugs | 19 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 28 | hadoop-hdds in the patch failed. | | -1 | unit | 26 | hadoop-ozone in the patch failed. | | +1 | asflicense | 31 | The patch does not generate ASF License warnings. | | | | 2479 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1578 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d12771a51d3b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b09d389 | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1578/1/artifact/out/patch-compile-hadoop-hdds.txt | | javac |
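The HDDS-2222 patch title describes adding a `ByteBuffer` overload to the pure-Java CRC implementations. The JDK's `java.util.zip.CRC32` already exposes the same pattern, so the sketch below uses it to illustrate the API shape such an overload gives callers; it is an illustration, not the HDDS-2222 code itself:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class CrcByteBufferDemo {
  public static void main(String[] args) {
    byte[] data = "123456789".getBytes(StandardCharsets.US_ASCII);

    // Classic array-based update.
    CRC32 byArray = new CRC32();
    byArray.update(data, 0, data.length);

    // ByteBuffer-based update: consumes the buffer from position() to limit(),
    // which lets callers checksum a direct buffer without copying it to an array.
    CRC32 byBuffer = new CRC32();
    byBuffer.update(ByteBuffer.wrap(data));

    // Both paths must agree; 0xCBF43926 is the standard CRC-32 check value
    // for the ASCII string "123456789".
    System.out.println(Long.toHexString(byArray.getValue()));  // cbf43926
    System.out.println(byArray.getValue() == byBuffer.getValue()); // true
  }
}
```

The main benefit of the `ByteBuffer` overload is for direct (off-heap) buffers, which are common in HDDS/Ozone I/O paths: without it, callers must copy bytes into a heap array before updating the checksum.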
[jira] [Commented] (HADOOP-16599) Allow a SignerInitializer to be specified along with a Custom Signer
[ https://issues.apache.org/jira/browse/HADOOP-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943234#comment-16943234 ] Hudson commented on HADOOP-16599: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17445 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17445/]) HADOOP-16599. Allow a SignerInitializer to be specified along with a (github: rev 559ee277f50716a9a8c736ba3b655aad9f616e96) * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestCustomSigner.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/TestSignerManager.java * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/SignerManager.java * (delete) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/SignerManager.java * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AwsSignerInitializer.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestSignerManager.java * (add) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/DelegationTokenProvider.java > Allow a SignerInitializer to be specified along with a Custom Signer > > > Key: HADOOP-16599 > URL: https://issues.apache.org/jira/browse/HADOOP-16599 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Siddharth Seth >Assignee: Siddharth Seth >Priority: Major > Fix For: 3.3.0 > > > HADOOP-16445 added support for custom signers. This is a follow up to allow > for an Initializer to be specified along with the Custom Signer, for any > initialization etc that is required by the custom signer specified. 
[GitHub] [hadoop] hadoop-yetus commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
hadoop-yetus commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537720953 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 75 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | -1 | mvninstall | 28 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. | | -1 | compile | 19 | hadoop-hdds in trunk failed. | | -1 | compile | 13 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 46 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 920 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 19 | hadoop-hdds in trunk failed. | | -1 | javadoc | 17 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1007 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 29 | hadoop-hdds in trunk failed. | | -1 | findbugs | 17 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. | | -1 | compile | 21 | hadoop-hdds in the patch failed. | | -1 | compile | 16 | hadoop-ozone in the patch failed. | | -1 | javac | 21 | hadoop-hdds in the patch failed. | | -1 | javac | 16 | hadoop-ozone in the patch failed. | | +1 | checkstyle | 52 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 793 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 19 | hadoop-hdds in the patch failed. 
| | -1 | javadoc | 16 | hadoop-ozone in the patch failed. | | -1 | findbugs | 28 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 24 | hadoop-hdds in the patch failed. | | -1 | unit | 23 | hadoop-ozone in the patch failed. | | +1 | asflicense | 28 | The patch does not generate ASF License warnings. | | | | 2448 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1577 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 50632eedcc48 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 53ed78b | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/branch-findbugs-hadoop-ozone.txt | | 
mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-compile-hadoop-hdds.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/2/artifact/out/patch-compile-hadoop-ozone.txt | |
[GitHub] [hadoop] hadoop-yetus commented on issue #1571: HDDS-2228. Fix NPE in OzoneDelegationTokenManager#addPersistedDelegat…
hadoop-yetus commented on issue #1571: HDDS-2228. Fix NPE in OzoneDelegationTokenManager#addPersistedDelegat… URL: https://github.com/apache/hadoop/pull/1571#issuecomment-537717775 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 88 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 19 | Maven dependency ordering for branch | | -1 | mvninstall | 45 | hadoop-hdds in trunk failed. | | -1 | mvninstall | 43 | hadoop-ozone in trunk failed. | | -1 | compile | 20 | hadoop-hdds in trunk failed. | | -1 | compile | 13 | hadoop-ozone in trunk failed. | | +1 | checkstyle | 53 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 989 | branch has no errors when building and testing our client artifacts. | | -1 | javadoc | 21 | hadoop-hdds in trunk failed. | | -1 | javadoc | 19 | hadoop-ozone in trunk failed. | | 0 | spotbugs | 1085 | Used deprecated FindBugs config; considering switching to SpotBugs. | | -1 | findbugs | 33 | hadoop-hdds in trunk failed. | | -1 | findbugs | 19 | hadoop-ozone in trunk failed. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 17 | Maven dependency ordering for patch | | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. | | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. | | -1 | compile | 23 | hadoop-hdds in the patch failed. | | -1 | compile | 18 | hadoop-ozone in the patch failed. | | -1 | javac | 23 | hadoop-hdds in the patch failed. | | -1 | javac | 18 | hadoop-ozone in the patch failed. | | +1 | checkstyle | 57 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. 
| | +1 | shadedclient | 804 | patch has no errors when building and testing our client artifacts. | | -1 | javadoc | 20 | hadoop-hdds in the patch failed. | | -1 | javadoc | 17 | hadoop-ozone in the patch failed. | | -1 | findbugs | 30 | hadoop-hdds in the patch failed. | | -1 | findbugs | 17 | hadoop-ozone in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 24 | hadoop-hdds in the patch failed. | | -1 | unit | 23 | hadoop-ozone in the patch failed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. | | | | 2640 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1571 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 15bb956d4ec8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 53ed78b | | Default Java | 1.8.0_222 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-compile-hadoop-ozone.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-javadoc-hadoop-hdds.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-javadoc-hadoop-ozone.txt | | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-findbugs-hadoop-hdds.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/branch-findbugs-hadoop-ozone.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-mvninstall-hadoop-hdds.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-mvninstall-hadoop-ozone.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-compile-hadoop-hdds.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-compile-hadoop-ozone.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1571/2/artifact/out/patch-compile-hadoop-hdds.txt | | javac |
[jira] [Resolved] (HADOOP-16599) Allow a SignerInitializer to be specified along with a Custom Signer
[ https://issues.apache.org/jira/browse/HADOOP-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth resolved HADOOP-16599. - Fix Version/s: 3.3.0 Resolution: Fixed > Allow a SignerInitializer to be specified along with a Custom Signer > > > Key: HADOOP-16599 > URL: https://issues.apache.org/jira/browse/HADOOP-16599 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Siddharth Seth >Assignee: Siddharth Seth >Priority: Major > Fix For: 3.3.0 > > > HADOOP-16445 added support for custom signers. This is a follow up to allow > for an Initializer to be specified along with the Custom Signer, for any > initialization etc. that is required by the custom signer specified.
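To make the improvement concrete: the extension point pairs a custom signer with an initializer that runs before the signer is used. The sketch below is a generic, self-contained illustration of that registration pattern only — all names here (`Signer`, `register`, `lookup`) are invented for illustration and are not the hadoop-aws API introduced by this change.

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of a "signer + initializer" registry; class and method
// names are hypothetical and do not correspond to hadoop-aws classes.
public class SignerRegistryDemo {
    interface Signer {
        String sign(String request);
    }

    private static final Map<String, Signer> REGISTRY = new HashMap<>();

    // The initializer runs once, at registration time, so the signer can
    // assume its setup (credentials, caches, ...) has already happened.
    static void register(String name, Signer signer, Runnable initializer) {
        if (initializer != null) {
            initializer.run();
        }
        REGISTRY.put(name, signer);
    }

    static Signer lookup(String name) {
        Signer s = REGISTRY.get(name);
        if (s == null) {
            throw new IllegalArgumentException("unknown signer: " + name);
        }
        return s;
    }

    public static void main(String[] args) {
        register("demo", req -> "signed(" + req + ")",
                () -> System.out.println("initializer ran"));
        System.out.println(lookup("demo").sign("GET /bucket/key"));
    }
}
```

The design point mirrored here is that the initializer is attached to the signer's registration, not to each signing call, so per-signer setup cost is paid once.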
[GitHub] [hadoop] sidseth merged pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a
sidseth merged pull request #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a URL: https://github.com/apache/hadoop/pull/1516 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sidseth commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a
sidseth commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a URL: https://github.com/apache/hadoop/pull/1516#issuecomment-537717273 Fixed the merge conflicts, and have run mvn javadoc:javadoc successfully. Tests run again against us-east-2. Usual failures + ITestRestrictedReadAccess (which fails with and without the patch). Filed HADOOP-16626. Thanks for the review. Merging the changes.
[jira] [Created] (HADOOP-16626) S3A ITestRestrictedReadAccess fails
Siddharth Seth created HADOOP-16626: --- Summary: S3A ITestRestrictedReadAccess fails Key: HADOOP-16626 URL: https://issues.apache.org/jira/browse/HADOOP-16626 Project: Hadoop Common Issue Type: Test Components: fs/s3 Reporter: Siddharth Seth Just tried running the S3A test suite. Consistently seeing the following. Command used {code} mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=12 -Ds3guard -Dauth -Ddynamo -Dtest=moo -Dit.test=ITestRestrictedReadAccess {code} cc [~ste...@apache.org] {code} --- Test set: org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess --- Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.335 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess) Time elapsed: 2.841 s <<< ERROR! java.nio.file.AccessDeniedException: test/testNoReadAccess-raw/noReadDir/emptyDir/: getFileStatus on test/testNoReadAccess-raw/noReadDir/emptyDir/: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=:403 Forbidden at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:244) at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2777) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2705) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2589) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:2377) at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$10(S3AFileSystem.java:2356) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110) at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:2356) at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:360) at org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:282) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FE8B4D6F25648BCD; S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk=), S3 Extended Request ID: hgUHzFskU9CcEUT3DxgAkYcWLl6vFoa1k7qXX29cx1u3lpl7RVsWr5rp27/B8s5yjmWvvi6hVgk= at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726) at
[GitHub] [hadoop] bharatviswa504 commented on issue #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA.
bharatviswa504 commented on issue #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA. URL: https://github.com/apache/hadoop/pull/1489#issuecomment-537711587 Thank You @xiaoyuyao for the review. I will commit this to the trunk.
[GitHub] [hadoop] bharatviswa504 merged pull request #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA.
bharatviswa504 merged pull request #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA. URL: https://github.com/apache/hadoop/pull/1489
[GitHub] [hadoop] szetszwo opened a new pull request #1578: HDDS-2222 Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C
szetszwo opened a new pull request #1578: HDDS-2222. Add a method to update ByteBuffer in PureJavaCrc32/PureJavaCrc32C URL: https://github.com/apache/hadoop/pull/1578
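For reference, the JDK's own checksum classes already offer the `ByteBuffer` overload this PR adds to the pure-Java implementations: `java.util.zip.CRC32.update(ByteBuffer)` has existed since Java 8 (and `CRC32C` since Java 9). A small sketch showing the array and buffer update paths agreeing — note this uses the JDK's `CRC32`, not Hadoop's `PureJavaCrc32`:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class CrcByteBufferDemo {
    public static void main(String[] args) {
        byte[] data = "hello, crc".getBytes();

        // Array-based update: the overload the pure-Java CRCs already had.
        CRC32 fromArray = new CRC32();
        fromArray.update(data, 0, data.length);

        // ByteBuffer-based update: the new overload; it avoids an extra copy
        // when the data already lives in a (possibly direct) buffer.
        ByteBuffer buf = ByteBuffer.allocateDirect(data.length);
        buf.put(data);
        buf.flip();
        CRC32 fromBuffer = new CRC32();
        fromBuffer.update(buf);

        // Both paths must produce the same checksum.
        System.out.println(fromArray.getValue() == fromBuffer.getValue()); // prints "true"
        // update(ByteBuffer) consumes the buffer: position advances to limit.
        System.out.println(buf.position() == buf.limit()); // prints "true"
    }
}
```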
[GitHub] [hadoop] xiaoyuyao commented on issue #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA.
xiaoyuyao commented on issue #1489: HDDS-2019. Handle Set DtService of token in S3Gateway for OM HA. URL: https://github.com/apache/hadoop/pull/1489#issuecomment-537710142 LGTM, +1.
[jira] [Commented] (HADOOP-15956) Use relative resource URLs across WebUI components
[ https://issues.apache.org/jira/browse/HADOOP-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943211#comment-16943211 ] David Mollitor commented on HADOOP-15956: - Discussion on this here: https://serverfault.com/questions/561892/how-to-handle-relative-urls-correctly-with-a-reverse-proxy > Use relative resource URLs across WebUI components > -- > > Key: HADOOP-15956 > URL: https://issues.apache.org/jira/browse/HADOOP-15956 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Greg Phillips >Assignee: Greg Phillips >Priority: Minor > Attachments: HADOOP-15956.001.patch > > > Similar to HDFS-12961 there are absolute paths used for static resources in > the WebUI for HDFS & KMS which can cause issues when attempting to access > these pages via a reverse proxy. Using relative paths in all WebUI components > will allow pages to render properly when using a reverse proxy.
[jira] [Comment Edited] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943206#comment-16943206 ] Siyao Meng edited comment on HADOOP-16152 at 10/2/19 10:25 PM: --- The DataNode jetty server max thread issue mentioned [above|https://issues.apache.org/jira/browse/HADOOP-16152?focusedCommentId=16942499=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16942499] can be solved by adding 1 to *DatanodeHttpServer#HTTP_MAX_THREADS*: {code} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java index 86672b403c9..9819fafe291 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java @@ -83,8 +83,9 @@ // set them to the minimum possible private static final int HTTP_SELECTOR_THREADS = 1; private static final int HTTP_ACCEPTOR_THREADS = 1; + // jetty 9.4: add one extra max thread private static final int HTTP_MAX_THREADS = - HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1; + HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1 + 1; private final HttpServer2 infoServer; private final EventLoopGroup bossGroup; private final EventLoopGroup workerGroup; {code} DataNode works after the change with jetty 9.4 on my Mac. Will post a 003 patch after the previous jenkins run finishes. 
was (Author: smeng): The DataNode jetty server max thread issue can be solved by adding 1 to DatanodeHttpServer#HTTP_MAX_THREADS: {code} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java index 86672b403c9..9819fafe291 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java @@ -83,8 +83,9 @@ // set them to the minimum possible private static final int HTTP_SELECTOR_THREADS = 1; private static final int HTTP_ACCEPTOR_THREADS = 1; + // jetty 9.4: add one extra max thread private static final int HTTP_MAX_THREADS = - HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1; + HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1 + 1; private final HttpServer2 infoServer; private final EventLoopGroup bossGroup; private final EventLoopGroup workerGroup; {code} DataNode works after the change with jetty 9.4 on my Mac. Will post a 003 patch after the previous jenkins run finishes. > Upgrade Eclipse Jetty version to 9.4.x > -- > > Key: HADOOP-16152 > URL: https://issues.apache.org/jira/browse/HADOOP-16152 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Yuming Wang >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, > HADOOP-16152.v1.patch > > > Some big data projects have been upgraded Jetty to 9.4.x, which causes some > compatibility issues. 
> Spark: > [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] > Calcite: > [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] > Hive: HIVE-21211
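The arithmetic behind the DatanodeHttpServer fix is worth spelling out (illustrative only — the actual constraint is enforced inside Jetty's thread pool, which is not reproduced here): the pool's maximum must leave spare worker threads beyond the dedicated selector and acceptor threads, and per the comment above, Jetty 9.4 needs one more such spare thread than 9.3 did.

```java
// Illustrative arithmetic only -- not Jetty or Hadoop code. Jetty dedicates
// the selector and acceptor threads, so only threads above that reserved
// count are available to actually serve requests.
public class ThreadBudgetDemo {
    static final int HTTP_SELECTOR_THREADS = 1;
    static final int HTTP_ACCEPTOR_THREADS = 1;

    // Budget before the patch: one spare worker thread.
    static final int OLD_HTTP_MAX_THREADS =
        HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1;
    // Budget after the patch: one extra thread for jetty 9.4.
    static final int NEW_HTTP_MAX_THREADS =
        HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1 + 1;

    public static void main(String[] args) {
        int reserved = HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS;
        System.out.println("spare worker threads before: " + (OLD_HTTP_MAX_THREADS - reserved)); // 1
        System.out.println("spare worker threads after:  " + (NEW_HTTP_MAX_THREADS - reserved)); // 2
    }
}
```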
[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943206#comment-16943206 ] Siyao Meng commented on HADOOP-16152: - The DataNode jetty server max thread issue can be solved by adding 1 to DatanodeHttpServer#HTTP_MAX_THREADS: {code} diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java index 86672b403c9..9819fafe291 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java @@ -83,8 +83,9 @@ // set them to the minimum possible private static final int HTTP_SELECTOR_THREADS = 1; private static final int HTTP_ACCEPTOR_THREADS = 1; + // jetty 9.4: add one extra max thread private static final int HTTP_MAX_THREADS = - HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1; + HTTP_SELECTOR_THREADS + HTTP_ACCEPTOR_THREADS + 1 + 1; private final HttpServer2 infoServer; private final EventLoopGroup bossGroup; private final EventLoopGroup workerGroup; {code} DataNode works after the change with jetty 9.4 on my Mac. Will post a 003 patch after the previous jenkins run finishes. > Upgrade Eclipse Jetty version to 9.4.x > -- > > Key: HADOOP-16152 > URL: https://issues.apache.org/jira/browse/HADOOP-16152 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Yuming Wang >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, > HADOOP-16152.v1.patch > > > Some big data projects have been upgraded Jetty to 9.4.x, which causes some > compatibility issues. 
> Spark: > [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] > Calcite: > [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] > Hive: HIVE-21211
[GitHub] [hadoop] bharatviswa504 commented on issue #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER.
bharatviswa504 commented on issue #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER. URL: https://github.com/apache/hadoop/pull/1567#issuecomment-537705435 Thank You @arp7 for the review. I will commit this to the trunk.
[GitHub] [hadoop] bharatviswa504 edited a comment on issue #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER.
bharatviswa504 edited a comment on issue #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER. URL: https://github.com/apache/hadoop/pull/1567#issuecomment-537705435 Thank You @arp7 for the review. Test failures are not related to this patch. I will commit this to the trunk.
[GitHub] [hadoop] bharatviswa504 merged pull request #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER.
bharatviswa504 merged pull request #1567: HDDS-2224. Fix loadup cache for cache cleanup policy NEVER. URL: https://github.com/apache/hadoop/pull/1567
[GitHub] [hadoop] hadoop-yetus commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a
hadoop-yetus commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a URL: https://github.com/apache/hadoop/pull/1516#issuecomment-537703095 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 42 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1088 | trunk passed | | +1 | compile | 35 | trunk passed | | +1 | checkstyle | 29 | trunk passed | | +1 | mvnsite | 40 | trunk passed | | +1 | shadedclient | 793 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 30 | trunk passed | | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 57 | trunk passed | | -0 | patch | 85 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 33 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 11 new + 10 unchanged - 2 fixed = 21 total (was 12) | | +1 | mvnsite | 33 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 778 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | +1 | findbugs | 63 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 72 | hadoop-aws in the patch passed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. 
| | | | 3294 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1516 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 230d3f683ca4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 685918e | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/4/testReport/ | | Max. process+thread count | 412 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 merged pull request #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
bharatviswa504 merged pull request #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511
[GitHub] [hadoop] bharatviswa504 commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
bharatviswa504 commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537702879 Test failures are not related to this patch. Thank You @arp7 and @anuengineer for the review. I will commit this to the trunk.
[GitHub] [hadoop] anuengineer commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
anuengineer commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537702251 I agree, once you do the standard sanity checks; I think we should go ahead and commit. Thank you for working on this.
[GitHub] [hadoop] hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
hadoop-yetus commented on issue #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#issuecomment-537702274 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 50 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1415 | trunk passed | | +1 | compile | 72 | trunk passed | | +1 | checkstyle | 53 | trunk passed | | +1 | mvnsite | 81 | trunk passed | | +1 | shadedclient | 985 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 82 | trunk passed | | 0 | spotbugs | 205 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 202 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 73 | the patch passed | | +1 | compile | 64 | the patch passed | | +1 | javac | 64 | the patch passed | | +1 | checkstyle | 43 | the patch passed | | +1 | mvnsite | 71 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | -1 | shadedclient | 272 | patch has errors when building and testing our client artifacts. | | +1 | javadoc | 90 | the patch passed | | -1 | findbugs | 26 | hadoop-hdfs in the patch failed. | ||| _ Other Tests _ | | -1 | unit | 25 | hadoop-hdfs in the patch failed. | | 0 | asflicense | 14 | ASF License check generated no output? 
| | | | 3669 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1573 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ff81f9dd94c3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 685918e | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/2/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/2/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1573/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] elek commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG.
elek commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG. URL: https://github.com/apache/hadoop/pull/1572#issuecomment-537701365 /retest
[jira] [Commented] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943185#comment-16943185 ] Jonathan Hung commented on HADOOP-16588: Attached 002 patch based on [~iwasakims]'s comment. > Update commons-beanutils version to 1.9.4 in branch-2 > - > > Key: HADOOP-16588 > URL: https://issues.apache.org/jira/browse/HADOOP-16588 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Critical > Labels: release-blocker > Attachments: HADOOP-16588-branch-2.002.patch, > HADOOP-16588.branch-2.001.patch > > > Similar to HADOOP-16542 but we need to do it differently. > In branch-2, we pull in commons-beanutils through commons-configuration 1.6 > --> commons-digester 1.8 > {noformat} > [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile > [INFO] | +- commons-digester:commons-digester:jar:1.8:compile > [INFO] | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile > [INFO] | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile > {noformat} > I have a patch to update version of the transitive dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16588) Update commons-beanutils version to 1.9.4 in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated HADOOP-16588: --- Attachment: HADOOP-16588-branch-2.002.patch > Update commons-beanutils version to 1.9.4 in branch-2 > - > > Key: HADOOP-16588 > URL: https://issues.apache.org/jira/browse/HADOOP-16588 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Critical > Labels: release-blocker > Attachments: HADOOP-16588-branch-2.002.patch, > HADOOP-16588.branch-2.001.patch > > > Similar to HADOOP-16542 but we need to do it differently. > In branch-2, we pull in commons-beanutils through commons-configuration 1.6 > --> commons-digester 1.8 > {noformat} > [INFO] +- commons-configuration:commons-configuration:jar:1.6:compile > [INFO] | +- commons-digester:commons-digester:jar:1.8:compile > [INFO] | | \- commons-beanutils:commons-beanutils:jar:1.7.0:compile > [INFO] | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile > {noformat} > I have a patch to update version of the transitive dependency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
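Because branch-2 pulls commons-beanutils in only transitively (via commons-configuration 1.6 and commons-digester 1.8, per the dependency tree above), the usual shape of such a fix is an exclusion on the module that drags the vulnerable jar in, plus an explicit pin to the fixed release. The POM fragment below is a hypothetical sketch of that pattern, not the contents of the attached patch — whether the actual patch uses exclusions, a dependencyManagement pin, or both is not shown in this thread.

```xml
<!-- Sketch only: exclude the transitive commons-beanutils 1.7.0 pulled in
     via commons-configuration 1.6 -> commons-digester 1.8, then declare the
     fixed 1.9.4 release directly. -->
<dependency>
  <groupId>commons-configuration</groupId>
  <artifactId>commons-configuration</artifactId>
  <version>1.6</version>
  <exclusions>
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>commons-beanutils</groupId>
  <artifactId>commons-beanutils</artifactId>
  <version>1.9.4</version>
</dependency>
```

A `mvn dependency:tree` run afterwards should show only the single 1.9.4 artifact on the classpath.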
[jira] [Commented] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943184#comment-16943184 ] Hadoop QA commented on HADOOP-16624: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} hadolint {color} | {color:green} 0m 2s{color} | {color:green} There were no new hadolint issues. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 15s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 | | JIRA Issue | HADOOP-16624 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982027/HADOOP-16624.001.patch | | Optional Tests | dupname asflicense hadolint shellcheck shelldocs | | uname | Linux 871f8cb969cf 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 685918e | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16563/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943180#comment-16943180 ] Wei-Chiu Chuang commented on HADOOP-16625: -- [~aajisaka] what do you think? > Backport HADOOP-14624 to branch-3.1 > --- > > Key: HADOOP-16625 > URL: https://issues.apache.org/jira/browse/HADOOP-16625 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-16625.branch-3.1.001.patch > > > I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of > them do not compile because of the commons-logging to slf4j migration. > One of the issues is that GenericTestUtils.DelayAnswer does not accept the slf4j > logger API. > Backport HADOOP-14624 to branch-3.1 to make backports easier. It updates the > DelayAnswer signature, but it's in the test scope, so we're not really > breaking backward compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] elek closed pull request #990: HDDS-1554. Create disk tests for fault injection test
elek closed pull request #990: HDDS-1554. Create disk tests for fault injection test URL: https://github.com/apache/hadoop/pull/990 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16625: - Status: Patch Available (was: Open) > Backport HADOOP-14624 to branch-3.1 > --- > > Key: HADOOP-16625 > URL: https://issues.apache.org/jira/browse/HADOOP-16625 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-16625.branch-3.1.001.patch > > > I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of > them do not compile because of the commons-logging to slf4j migration. > One of the issues is that GenericTestUtils.DelayAnswer does not accept the slf4j > logger API. > Backport HADOOP-14624 to branch-3.1 to make backports easier. It updates the > DelayAnswer signature, but it's in the test scope, so we're not really > breaking backward compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16625: - Attachment: HADOOP-16625.branch-3.1.001.patch > Backport HADOOP-14624 to branch-3.1 > --- > > Key: HADOOP-16625 > URL: https://issues.apache.org/jira/browse/HADOOP-16625 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-16625.branch-3.1.001.patch > > > I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of > them do not compile because of the commons-logging to slf4j migration. > One of the issues is that GenericTestUtils.DelayAnswer does not accept the slf4j > logger API. > Backport HADOOP-14624 to branch-3.1 to make backports easier. It updates the > DelayAnswer signature, but it's in the test scope, so we're not really > breaking backward compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-16625: Assignee: Wei-Chiu Chuang > Backport HADOOP-14624 to branch-3.1 > --- > > Key: HADOOP-16625 > URL: https://issues.apache.org/jira/browse/HADOOP-16625 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-16625.branch-3.1.001.patch > > > I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of > them do not compile because of the commons-logging to slf4j migration. > One of the issues is that GenericTestUtils.DelayAnswer does not accept the slf4j > logger API. > Backport HADOOP-14624 to branch-3.1 to make backports easier. It updates the > DelayAnswer signature, but it's in the test scope, so we're not really > breaking backward compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16625) Backport HADOOP-14624 to branch-3.1
Wei-Chiu Chuang created HADOOP-16625: Summary: Backport HADOOP-14624 to branch-3.1 Key: HADOOP-16625 URL: https://issues.apache.org/jira/browse/HADOOP-16625 Project: Hadoop Common Issue Type: Improvement Reporter: Wei-Chiu Chuang I am trying to bring commits from trunk/branch-3.2 to branch-3.1, but some of them do not compile because of the commons-logging to slf4j migration. One of the issues is that GenericTestUtils.DelayAnswer does not accept the slf4j logger API. Backport HADOOP-14624 to branch-3.1 to make backports easier. It updates the DelayAnswer signature, but it's in the test scope, so we're not really breaking backward compat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943171#comment-16943171 ] Hadoop QA commented on HADOOP-16624: (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/16563/console in case of problems. > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
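The Dockerfile change itself is mechanical; a hedged sketch of what bumping the pinned hugo release usually looks like (the download URL follows hugo's release-asset naming convention and is an assumption, not the contents of HADOOP-16624.001.patch):

```
# Sketch: bump the pinned hugo release from 0.30.2 to 0.58.3.
ENV HUGO_VERSION=0.58.3
RUN curl -L -o /tmp/hugo.deb \
      "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.deb" \
 && dpkg -i /tmp/hugo.deb \
 && rm /tmp/hugo.deb
```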
[jira] [Comment Edited] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942499#comment-16942499 ] Siyao Meng edited comment on HADOOP-16152 at 10/2/19 9:07 PM: -- [~weichiu] -I applied the rev 002 patch locally on latest trunk. The compile (mvn install -Pdist -DskipTests -e -Dmaven.javadoc.skip=true) also succeeded for me. No such "org.eclipse.jetty.server" deprecation warnings. BUT the NameNode / DataNode will fail to start, possibly due to the incorrect shading. Hmm.- {code:title=NameNode failed to start w/ patch rev 002} ... 2019-10-01 22:16:10,492 ERROR namenode.NameNode: Failed to start namenode. java.io.IOException: Unable to initialize WebAppContext at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1185) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:917) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:985) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:958) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1727) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1792) Caused by: java.lang.NoSuchMethodError: org.eclipse.jetty.server.ResourceContentFactory.(Lorg/eclipse/jetty/util/resource/ResourceFactory;Lorg/eclipse/jetty/http/MimeTypes;Z)V at org.eclipse.jetty.servlet.DefaultServlet.init(DefaultServlet.java:293) at javax.servlet.GenericServlet.init(GenericServlet.java:244) ... {code} It turns out I forgot to include "clean" command in maven so the resulting distro includes two versions of jetty. And NN possibly picks up the wrong version. With command *mvn clean install -Pdist -DskipTests -e -Dmaven.javadoc.skip=true*, the NN will start normally now. 
But the DN would fail due to some incorrect jetty configs, guess I need to tune that somehow: {code} 2019-10-02 14:01:58,227 INFO thread.ThreadPoolBudget: SelectorManager@ServerConnector@68034211{HTTP/1.1,[http/1.1]}{localhost:50619} requires 1 threads from QueuedThreadPool[qtp1396431506]@533bda92{STARTED,3<=3<=3,i=3,r=1,q=0}[ReservedThreadExecutor@1fa1cab1{s=0/1,p=0}] 2019-10-02 14:01:58,229 INFO datanode.DataNode: Waiting up to 30 seconds for transfer threads to complete 2019-10-02 14:01:58,229 INFO datanode.DataNode: Gracefully shutting down executor service. Waiting max 15 SECONDS 2019-10-02 14:01:58,229 INFO datanode.DataNode: Succesfully shutdown executor service 2019-10-02 14:01:58,230 INFO datanode.DataNode: Shutdown complete. 2019-10-02 14:01:58,231 ERROR datanode.DataNode: Exception in secureMain java.io.IOException: Problem starting http server at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1194) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:141) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:978) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1438) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:513) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2843) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2749) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2793) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2937) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2961) Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=3 < max=3 for QueuedThreadPool[qtp1396431506]@533bda92{STARTED,3<=3<=3,i=3,r=1,q=0}[ReservedThreadExecutor@1fa1cab1{s=0/1,p=0}] at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:156) at 
org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130) at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:182) at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72) at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169) at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110) at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:283) at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81) at
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330771207 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java ## @@ -244,6 +244,11 @@ final boolean hasNoStorage() { return true; } + @Override + boolean isProvided() { +return false; Review comment: Added the javadoc. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330770913 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java ## @@ -77,9 +85,31 @@ boolean removeStorage(DatanodeStorageInfo storage) { setStorageInfo(dnIndex, getStorageInfo(lastNode)); // set the last entry to null setStorageInfo(lastNode, null); +if (storage.getStorageType() == StorageType.PROVIDED +&& !hasProvidedStorages()) { Review comment: I removed the check and there is no invocation in the regular pipeline now. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330770722 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java ## @@ -28,12 +29,16 @@ @InterfaceAudience.Private public class BlockInfoContiguous extends BlockInfo { + private boolean hasProvidedStorage; + public BlockInfoContiguous(short size) { super(size); +hasProvidedStorage = false; Review comment: I removed it for now. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage.
ashvina commented on a change in pull request #1573: HDFS-14889. Ability to check if a block has a replica on provided storage. URL: https://github.com/apache/hadoop/pull/1573#discussion_r330771013 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java ## @@ -77,9 +85,31 @@ boolean removeStorage(DatanodeStorageInfo storage) { setStorageInfo(dnIndex, getStorageInfo(lastNode)); // set the last entry to null setStorageInfo(lastNode, null); +if (storage.getStorageType() == StorageType.PROVIDED +&& !hasProvidedStorages()) { + hasProvidedStorage = false; +} return true; } + @Override + boolean isProvided() { +return hasProvidedStorage; + } + + private boolean hasProvidedStorages() { Review comment: Added the javadoc. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
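The review thread above revolves around keeping a hasProvidedStorage flag consistent as storages are added and removed. The pattern can be sketched in isolation; the names below are hypothetical stand-ins, not the actual BlockInfoContiguous code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the pattern under review: remember whether any
// attached replica lives on PROVIDED storage, and recompute the flag when
// a PROVIDED replica is removed (it may have been the last one).
enum StorageType { DISK, SSD, PROVIDED }

class BlockStorages {
    private final List<StorageType> storages = new ArrayList<>();
    private boolean hasProvidedStorage = false;

    void addStorage(StorageType type) {
        storages.add(type);
        if (type == StorageType.PROVIDED) {
            hasProvidedStorage = true;
        }
    }

    void removeStorage(StorageType type) {
        storages.remove(type);
        // Only removal of a PROVIDED replica can clear the flag.
        if (type == StorageType.PROVIDED) {
            hasProvidedStorage = storages.contains(StorageType.PROVIDED);
        }
    }

    boolean isProvided() {
        return hasProvidedStorage;
    }
}
```

This mirrors the reviewed suggestion to keep the recomputation out of the common add/remove path except when a PROVIDED replica is actually involved.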
[jira] [Updated] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kevin su updated HADOOP-16624: -- Status: Patch Available (was: Open) > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16624) Upgrade hugo to the latest version in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] kevin su updated HADOOP-16624: -- Attachment: HADOOP-16624.001.patch > Upgrade hugo to the latest version in Dockerfile > > > Key: HADOOP-16624 > URL: https://issues.apache.org/jira/browse/HADOOP-16624 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: kevin su >Priority: Minor > Attachments: HADOOP-16624.001.patch > > > In Dockerfile, the hugo version is 0.30.2. Now the latest hugo version is > 0.58.3. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-16152: --- Description: Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues. Spark: [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] Hive: HIVE-21211 was: Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues. Spark: [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] Hive: https://issues.apache.org/jira/browse/HIVE-21211 > Upgrade Eclipse Jetty version to 9.4.x > -- > > Key: HADOOP-16152 > URL: https://issues.apache.org/jira/browse/HADOOP-16152 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.0 >Reporter: Yuming Wang >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, > HADOOP-16152.v1.patch > > > Some big data projects have upgraded Jetty to 9.4.x, which causes some > compatibility issues. > Spark: > [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146] > Calcite: > [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87] > Hive: HIVE-21211 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
bharatviswa504 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#discussion_r330760039 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java ## @@ -25,42 +25,146 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; /** * Manages the locks on a given resource. A new lock is created for each * and every unique resource. Uniqueness of resource depends on the * {@code equals} implementation of it. */ -public class LockManager { +public class LockManager { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); - private final Map activeLocks = new ConcurrentHashMap<>(); + private final Map activeLocks = new ConcurrentHashMap<>(); private final GenericObjectPool lockPool = new GenericObjectPool<>(new PooledLockFactory()); /** - * Creates new LockManager instance. + * Creates new LockManager instance with the given Configuration. * * @param conf Configuration object */ - public LockManager(Configuration conf) { -int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, + public LockManager(final Configuration conf) { +final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); lockPool.setMaxTotal(maxPoolSize); } - /** * Acquires the lock on given resource. * * If the lock is not available then the current thread becomes * disabled for thread scheduling purposes and lies dormant until the * lock has been acquired. + * + * @param resource on which the lock has to be acquired + * @deprecated Use {@link LockManager#writeLock} instead + */ + public void lock(final R resource) { + writeLock(resource); + } + + /** + * Releases the lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @deprecated Use {@link LockManager#writeUnlock} instead + */ + public void unlock(final R resource) { + writeUnlock(resource); + } + + /** + * Acquires the read lock on given resource. + * + * Acquires the read lock on resource if the write lock is not held by + * another thread and returns immediately. + * + * If the write lock on resource is held by another thread then + * the current thread becomes disabled for thread scheduling + * purposes and lies dormant until the read lock has been acquired. + * + * @param resource on which the read lock has to be acquired + */ + public void readLock(final R resource) { +acquire(resource, ActiveLock::readLock); + } + + /** + * Releases the read lock on given resource. + * + * @param resource for which the read lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void readUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::readUnlock); + } + + /** + * Acquires the write lock on given resource. + * + * Acquires the write lock on resource if neither the read nor write lock + * are held by another thread and returns immediately. + * + * If the current thread already holds the write lock then the + * hold count is incremented by one and the method returns + * immediately. + * + * If the lock is held by another thread then the current + * thread becomes disabled for thread scheduling purposes and + * lies dormant until the write lock has been acquired. + * + * @param resource on which the lock has to be acquired */ - public void lock(T resource) { -activeLocks.compute(resource, (k, v) -> { - ActiveLock lock; + public void writeLock(final R resource) { +acquire(resource, ActiveLock::writeLock); + } + + /** + * Releases the write lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void writeUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::writeUnlock); + } + + /** + * Acquires the lock on given resource using the provided lock function. + * + * @param resource on which the lock has to be acquired + * @param lockFn function to acquire the lock + */ + private void acquire(final R resource, final Consumer lockFn) { +lockFn.accept(getLockForLocking(resource)); Review comment: Understood. Thanks for the clear explanation. If it is not much work, can we add some comments in the code? That would make it easier to read. (If you think this is obvious, you can leave it as is.)
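To make the read/write semantics of the API under review concrete, here is a minimal stand-alone sketch of per-resource read/write locking over a ConcurrentHashMap. Names are illustrative; the real LockManager additionally pools and reference-counts its ActiveLock instances, which this sketch omits:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal per-resource read/write locking. Unlike the patch under review,
// locks are never evicted from the map (no pooling or ref-counting).
class SimpleLockManager<R> {
    private final ConcurrentMap<R, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    // Lazily create one lock per unique resource (uniqueness follows equals()).
    private ReentrantReadWriteLock lockFor(final R resource) {
        return locks.computeIfAbsent(resource, k -> new ReentrantReadWriteLock());
    }

    public void readLock(final R resource)    { lockFor(resource).readLock().lock(); }
    public void readUnlock(final R resource)  { lockFor(resource).readLock().unlock(); }
    public void writeLock(final R resource)   { lockFor(resource).writeLock().lock(); }
    public void writeUnlock(final R resource) { lockFor(resource).writeLock().unlock(); }

    // Visibility helper, used here only for demonstration.
    public boolean isWriteLockedByCurrentThread(final R resource) {
        return lockFor(resource).isWriteLockedByCurrentThread();
    }
}
```

Multiple threads may hold the read lock on the same resource concurrently, while the write lock is exclusive and reentrant, matching the javadoc in the diff above.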
[GitHub] [hadoop] hadoop-yetus commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
hadoop-yetus commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537670279

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 42 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 16 | Maven dependency ordering for branch |
| -1 | mvninstall | 29 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
| -1 | compile | 23 | hadoop-hdds in trunk failed. |
| -1 | compile | 14 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 60 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1038 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1137 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 35 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 18 | Maven dependency ordering for patch |
| -1 | mvninstall | 37 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 41 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 18 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 18 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 64 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 875 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 36 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 29 | hadoop-hdds in the patch failed. |
| -1 | unit | 27 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
| | | 2751 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1511 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 865a7f64ca11 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 685918e |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/5/artifact/out/patch-compile-hadoop-hdds.txt |
[GitHub] [hadoop] hadoop-yetus commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
hadoop-yetus commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537669582

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 82 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 48 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 41 | hadoop-ozone in trunk failed. |
| -1 | compile | 19 | hadoop-hdds in trunk failed. |
| -1 | compile | 13 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 60 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 958 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1048 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 15 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 15 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 52 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 806 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 27 | hadoop-hdds in the patch failed. |
| -1 | unit | 24 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
| | | 2561 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1577 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 64c3bf470a41 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 685918e |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-compile-hadoop-hdds.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1577/1/artifact/out/patch-compile-hadoop-ozone.txt |
[GitHub] [hadoop] hadoop-yetus commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager.
hadoop-yetus commented on issue #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#issuecomment-537668863

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 71 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 46 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
| -1 | compile | 19 | hadoop-hdds in trunk failed. |
| -1 | compile | 12 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 58 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 952 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 1042 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 30 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
| -1 | compile | 21 | hadoop-hdds in the patch failed. |
| -1 | compile | 15 | hadoop-ozone in the patch failed. |
| -1 | javac | 21 | hadoop-hdds in the patch failed. |
| -1 | javac | 15 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 51 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 798 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 100 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 23 | hadoop-hdds in the patch failed. |
| -1 | unit | 23 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 2579 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1564 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d6eb97a38316 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 685918e |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-compile-hadoop-hdds.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1564/2/artifact/out/patch-compile-hadoop-ozone.txt |
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1486: HDDS-2158. Fixing Json Injection Issue in JsonUtils.
bharatviswa504 commented on a change in pull request #1486: HDDS-2158. Fixing Json Injection Issue in JsonUtils. URL: https://github.com/apache/hadoop/pull/1486#discussion_r330754661

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/RemoveAclBucketHandler.java

@@ -92,8 +92,9 @@ public Void call() throws Exception {
     boolean result = client.getObjectStore().removeAcl(obj,
         OzoneAcl.parseAcl(acl));
-    System.out.printf("%s%n", JsonUtils.toJsonStringWithDefaultPrettyPrinter(
-        JsonUtils.toJsonString("Acl removed successfully: " + result)));
+    System.out.printf("%s%n", result ? "ACL removed successfully" :
+        "ACL not removed");

Review comment: From my understanding, the addAcl behavior is: it returns true if the ACL is added successfully, and it returns false when the ACL being added already exists.

> If we are trying to add an already existing ACL, shouldn't we return true?

I think returning true is not the right behavior, as it would not be clear whether the add was successful or not. We should return a clear message to the end user explaining the difference between true and false.

`But I think that statement also does not convey the message properly.`

Agreed, this was the existing behavior; if you want to fix it in a new Jira, I am okay with that. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
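The reviewed change maps the boolean returned by removeAcl straight to a user-facing message, which is the ambiguity the reviewers debate: false can mean either "the ACL was not there" or "the operation failed". A minimal, standalone sketch of that mapping (the class name `AclResultMessage` is hypothetical, not an Ozone class):

```java
// Hypothetical sketch of the message mapping under review.
public class AclResultMessage {
    // addAcl/removeAcl are described as returning true on success and
    // false when the ACL already exists / was absent -- the two cases a
    // single boolean cannot distinguish from a genuine failure.
    static String format(boolean result) {
        return result ? "ACL removed successfully" : "ACL not removed";
    }
}
```

This illustrates why the reviewers suggest a clearer per-case message in a follow-up Jira: the boolean collapses distinct outcomes into one bit.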
[GitHub] [hadoop] avijayanhwx commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
avijayanhwx commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537656726 /label ozone
[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943121#comment-16943121 ]

Siyao Meng commented on HADOOP-16152:
-------------------------------------

[~yumwang] I can take over this one if you are not actively working on it, if that's okay. As I'm not a committer, I have to temporarily assign the issue to myself and upload the patch to trigger Jenkins (the previous results on the rev 002 patch expired).

> Upgrade Eclipse Jetty version to 9.4.x
> --------------------------------------
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.2.0
> Reporter: Yuming Wang
> Assignee: Yuming Wang
> Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, HADOOP-16152.v1.patch
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues.
> Spark: https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
> Hive: https://issues.apache.org/jira/browse/HIVE-21211

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siyao Meng reassigned HADOOP-16152:
-----------------------------------

Assignee: Siyao Meng (was: Yuming Wang)

> Upgrade Eclipse Jetty version to 9.4.x
> --------------------------------------
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.2.0
> Reporter: Yuming Wang
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, HADOOP-16152.v1.patch
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues.
> Spark: https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
> Hive: https://issues.apache.org/jira/browse/HIVE-21211
[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siyao Meng updated HADOOP-16152:
--------------------------------

Attachment: HADOOP-16152.002.patch
Status: Patch Available (was: In Progress)

Retriggering Jenkins on the same patch rev 002 by [~weichiu]

> Upgrade Eclipse Jetty version to 9.4.x
> --------------------------------------
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.2.0
> Reporter: Yuming Wang
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, HADOOP-16152.v1.patch
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues.
> Spark: https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
> Hive: https://issues.apache.org/jira/browse/HIVE-21211
[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x
[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siyao Meng updated HADOOP-16152:
--------------------------------

Status: In Progress (was: Patch Available)

> Upgrade Eclipse Jetty version to 9.4.x
> --------------------------------------
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.2.0
> Reporter: Yuming Wang
> Assignee: Yuming Wang
> Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.v1.patch
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some compatibility issues.
> Spark: https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
> Hive: https://issues.apache.org/jira/browse/HIVE-21211
[GitHub] [hadoop] avijayanhwx commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
avijayanhwx commented on issue #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577#issuecomment-537653624 cc @vivekratnavel / @shwetayakkali / @swagle
[GitHub] [hadoop] steveloughran commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a
steveloughran commented on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a URL: https://github.com/apache/hadoop/pull/1516#issuecomment-537653096 Patch LGTM, +1 once you fix whatever merge conflicts have crept in (Constants, inevitably). Regarding instrumentation, it'd make sense to have some interface for the signers to invoke with some signed/rejected counters; we'd have an implementation in S3AInstrumentation which would be the one normally passed down. Now, if we also wanted to track signing latency, that would be fun, and it might be something we'd always want to track, given the various extension points for auth which exist (AWS IAM stuff, our DT plugins, etc.)
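The instrumentation idea above (a counter interface signers invoke, with the real implementation living in S3AInstrumentation) could look roughly like this. All names here are hypothetical sketches; none of them exist in hadoop-aws:

```java
// Hypothetical sketch of the signed/rejected counter interface suggested
// in the comment; SignerMetrics and CountingSignerMetrics are invented names.
public class SignerMetricsSketch {
    interface SignerMetrics {
        void signed();    // one request was successfully signed
        void rejected();  // one signing attempt was refused or failed
    }

    // Trivial counting implementation, standing in for the
    // S3AInstrumentation-backed one the comment proposes passing down.
    static class CountingSignerMetrics implements SignerMetrics {
        long signedCount;
        long rejectedCount;
        public void signed() { signedCount++; }
        public void rejected() { rejectedCount++; }
    }
}
```

A latency-tracking variant would add something like `void signingLatency(long nanos)` to the interface, which is the "always want to track" extension the comment floats.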
[GitHub] [hadoop] avijayanhwx opened a new pull request #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly.
avijayanhwx opened a new pull request #1577: HDDS-2200 : Recon does not handle the NULL snapshot from OM DB cleanly. URL: https://github.com/apache/hadoop/pull/1577

- Fix NULL OM snapshot handling in Recon.
- Bootstrap Recon startup with last known OM snapshot DB and Recon container DB.
- Add more useful log lines.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a
hadoop-yetus removed a comment on issue #1516: HADOOP-16599. Allow a SignerInitializer to be specified along with a URL: https://github.com/apache/hadoop/pull/1516#issuecomment-534696847

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1050 | trunk passed |
| +1 | compile | 35 | trunk passed |
| +1 | checkstyle | 27 | trunk passed |
| +1 | mvnsite | 40 | trunk passed |
| +1 | shadedclient | 788 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 30 | trunk passed |
| 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 56 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 34 | the patch passed |
| +1 | compile | 29 | the patch passed |
| +1 | javac | 29 | the patch passed |
| -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) |
| +1 | mvnsite | 33 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 769 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 26 | the patch passed |
| +1 | findbugs | 62 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 69 | hadoop-aws in the patch passed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 3241 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1516 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 18ec9c9f4269 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / afa1006 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/1/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1516/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] bharatviswa504 commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
bharatviswa504 commented on issue #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511#issuecomment-537648780 /retest
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make OM Generic related configuration support HA style config.
bharatviswa504 commented on a change in pull request #1511: HDDS-2162. Make OM Generic related configuration support HA style config. URL: https://github.com/apache/hadoop/pull/1511#discussion_r330735520

## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java

@@ -309,13 +305,33 @@ private OzoneManager(OzoneConfiguration conf) throws IOException, AuthenticationException {
     super(OzoneVersionInfo.OZONE_VERSION_INFO);
     Preconditions.checkNotNull(conf);
-    configuration = conf;
+    configuration = new OzoneConfiguration(conf);

Review comment: Thanks for the catch and for bringing this up. This has caused a test failure too. Reverted this back.
[GitHub] [hadoop] hadoop-yetus commented on issue #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode.
hadoop-yetus commented on issue #1540: HDDS-2198. SCM should not consider containers in CLOSING state to come out of safemode. URL: https://github.com/apache/hadoop/pull/1540#issuecomment-537647448

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 32 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-hdds in trunk failed. |
| -1 | compile | 15 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 60 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 851 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 21 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 967 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 46 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 19 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 19 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 57 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 1 | The patch has no whitespace issues. |
| +1 | shadedclient | 721 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 26 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 2370 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1540 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux e6961ad387b9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e8ae632 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/2/artifact/out/patch-compile-hadoop-hdds.txt |
[GitHub] [hadoop] anuengineer commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom.
anuengineer commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom. URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537646083 @dineshchitlangia Thank you for the review. I have committed this patch to the trunk. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] anuengineer merged pull request #1574: HDDS-2227. GDPR key generation could benefit from secureRandom.
anuengineer merged pull request #1574: HDDS-2227. GDPR key generation could benefit from secureRandom. URL: https://github.com/apache/hadoop/pull/1574
[GitHub] [hadoop] dineshchitlangia commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom.
dineshchitlangia commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom. URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537645815 +1 LGTM, failures don't seem related to the patch. Thanks Anu.
[GitHub] [hadoop] anuengineer commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom.
anuengineer commented on issue #1574: HDDS-2227. GDPR key generation could benefit from secureRandom. URL: https://github.com/apache/hadoop/pull/1574#issuecomment-537645817 The failures are not related to this patch.
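The patch itself is not quoted in this thread, but the change HDDS-2227 describes — generating key material with `java.security.SecureRandom` rather than `java.util.Random` — has the following general shape. This is a hypothetical, minimal sketch; the class and method names are illustrative and are not Ozone's actual GDPR key-generation code:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SecretGenerator {
    // SecureRandom is a cryptographically strong RNG, unlike
    // java.util.Random, whose output is predictable from its seed.
    private static final SecureRandom RNG = new SecureRandom();

    // Returns numBytes of strong random data, Base64-encoded so it can
    // be used as a printable secret or key string.
    public static String newSecret(int numBytes) {
        byte[] buf = new byte[numBytes];
        RNG.nextBytes(buf);
        return Base64.getEncoder().encodeToString(buf);
    }

    public static void main(String[] args) {
        System.out.println(SecretGenerator.newSecret(32));
    }
}
```

A single shared `SecureRandom` instance is the usual pattern: construction can be comparatively expensive, while `nextBytes` is thread-safe.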
[jira] [Commented] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract
[ https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943098#comment-16943098 ] Steve Loughran commented on HADOOP-15871: - just looked at ABFS input streams here:

{code:java}
 * This is to match the behavior of DFSInputStream.available(),
 * which some clients may rely on (HBase write-ahead log reading in
 * particular)."
{code}

If that is true (and given the gzip issues) I'm going to have to make this a WONTFIX. Sorry

> Some input streams does not obey "java.io.InputStream.available" contract > -- > > Key: HADOOP-15871 > URL: https://issues.apache.org/jira/browse/HADOOP-15871 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Reporter: Shixiong Zhu >Priority: Major > > E.g, DFSInputStream and S3AInputStream return the size of the remaining > available bytes, but the javadoc of "available" says it should "Returns an > estimate of the number of bytes that can be read (or skipped over) from this > input stream *without blocking* by the next invocation of a method for this > input stream." > I understand that some applications may rely on the current behavior. It > would be great that there is an interface to document how "available" should > be implemented.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract
[ https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-15871. - Resolution: Won't Fix
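For readers following the contract question: `java.io.InputStream.available()` only promises an estimate of the bytes readable without blocking, while `DFSInputStream` and `S3AInputStream` return the total remaining bytes. A small illustration with an in-memory stream, where the two notions happen to coincide (which is exactly why callers such as HBase come to rely on the stronger behavior):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class AvailableContract {
    // available() after consuming `consumed` bytes of a `total`-byte
    // in-memory stream. For ByteArrayInputStream this equals the exact
    // remaining byte count, but the InputStream javadoc only guarantees
    // "an estimate of the number of bytes that can be read (or skipped
    // over) ... without blocking".
    static int availableAfter(int total, int consumed) {
        InputStream in = new ByteArrayInputStream(new byte[total]);
        try {
            in.skip(consumed);
            return in.available();
        } catch (IOException e) {
            // cannot happen for an in-memory stream
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(availableAfter(16, 6)); // 10 bytes remain
    }
}
```

A network-backed stream is free to return a much smaller number here — even 0 — without violating the contract; that gap between the javadoc and the DFS/S3A behavior is what this issue was tracking before it was resolved Won't Fix.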
[GitHub] [hadoop] anuengineer closed pull request #1521: HDDS-2073. Make SCMSecurityProtocol message based
anuengineer closed pull request #1521: HDDS-2073. Make SCMSecurityProtocol message based URL: https://github.com/apache/hadoop/pull/1521
[GitHub] [hadoop] anuengineer commented on issue #1521: HDDS-2073. Make SCMSecurityProtocol message based
anuengineer commented on issue #1521: HDDS-2073. Make SCMSecurityProtocol message based URL: https://github.com/apache/hadoop/pull/1521#issuecomment-537644806 @arp7 Thanks for the review. I have rebased and added the comment that Arpit wanted while committing this patch. @elek Thank you very much for the contribution.
[GitHub] [hadoop] anuengineer commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG.
anuengineer commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG. URL: https://github.com/apache/hadoop/pull/1572#issuecomment-537642474 The build failure does not look like a code issue. More like an infra problem.

```
[INFO] --- exec-maven-plugin:1.3.1:exec (dist) @ hadoop-ozone-dist ---
/workdir/hadoop-ozone/dev-support/checks/build.sh: line 20:44 Killed mvn -B -f pom.ozone.xml -Dmaven.javadoc.skip=true -DskipTests clean install
```
[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#discussion_r330727807 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java ## @@ -25,42 +25,146 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; /** * Manages the locks on a given resource. A new lock is created for each * and every unique resource. Uniqueness of resource depends on the * {@code equals} implementation of it. */ -public class LockManager { +public class LockManager { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); - private final Map activeLocks = new ConcurrentHashMap<>(); + private final Map activeLocks = new ConcurrentHashMap<>(); private final GenericObjectPool lockPool = new GenericObjectPool<>(new PooledLockFactory()); /** - * Creates new LockManager instance. + * Creates new LockManager instance with the given Configuration. * * @param conf Configuration object */ - public LockManager(Configuration conf) { -int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, + public LockManager(final Configuration conf) { +final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); lockPool.setMaxTotal(maxPoolSize); } - /** * Acquires the lock on given resource. * * If the lock is not available then the current thread becomes * disabled for thread scheduling purposes and lies dormant until the * lock has been acquired. + * + * @param resource on which the lock has to be acquired + * @deprecated Use {@link LockManager#writeLock} instead + */ + public void lock(final R resource) { Review comment: Yup, they are marked as `deprecated`, should be removed once we move completely to new APIs. This is an automated message from the Apache Git Service. 
[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#discussion_r330725498 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java ## @@ -25,42 +25,146 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; /** * Manages the locks on a given resource. A new lock is created for each * and every unique resource. Uniqueness of resource depends on the * {@code equals} implementation of it. */ -public class LockManager { +public class LockManager { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); - private final Map activeLocks = new ConcurrentHashMap<>(); + private final Map activeLocks = new ConcurrentHashMap<>(); private final GenericObjectPool lockPool = new GenericObjectPool<>(new PooledLockFactory()); /** - * Creates new LockManager instance. + * Creates new LockManager instance with the given Configuration. * * @param conf Configuration object */ - public LockManager(Configuration conf) { -int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, + public LockManager(final Configuration conf) { +final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); lockPool.setMaxTotal(maxPoolSize); } - /** * Acquires the lock on given resource. * * If the lock is not available then the current thread becomes * disabled for thread scheduling purposes and lies dormant until the * lock has been acquired. + * + * @param resource on which the lock has to be acquired + * @deprecated Use {@link LockManager#writeLock} instead + */ + public void lock(final R resource) { + writeLock(resource); + } + + /** + * Releases the lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @deprecated Use {@link LockManager#writeUnlock} instead + */ + public void unlock(final R resource) { + writeUnlock(resource); + } + + /** + * Acquires the read lock on given resource. + * + * Acquires the read lock on resource if the write lock is not held by + * another thread and returns immediately. + * + * If the write lock on resource is held by another thread then + * the current thread becomes disabled for thread scheduling + * purposes and lies dormant until the read lock has been acquired. + * + * @param resource on which the read lock has to be acquired + */ + public void readLock(final R resource) { +acquire(resource, ActiveLock::readLock); + } + + /** + * Releases the read lock on given resource. + * + * @param resource for which the read lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void readUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::readUnlock); + } + + /** + * Acquires the write lock on given resource. + * + * Acquires the write lock on resource if neither the read nor write lock + * are held by another thread and returns immediately. + * + * If the current thread already holds the write lock then the + * hold count is incremented by one and the method returns + * immediately. + * + * If the lock is held by another thread then the current + * thread becomes disabled for thread scheduling purposes and + * lies dormant until the write lock has been acquired. + * + * @param resource on which the lock has to be acquired */ - public void lock(T resource) { -activeLocks.compute(resource, (k, v) -> { - ActiveLock lock; + public void writeLock(final R resource) { +acquire(resource, ActiveLock::writeLock); + } + + /** + * Releases the write lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void writeUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::writeUnlock); + } + + /** + * Acquires the lock on given resource using the provided lock function. + * + * @param resource on which the lock has to be acquired + * @param lockFn function to acquire the lock + */ + private void acquire(final R resource, final Consumer lockFn) { +lockFn.accept(getLockForLocking(resource)); + } + + /** + * Releases the lock on given resource using the provided release function. + * + * @param resource for which the lock has to be released + * @param releaseFn function to release the lock + */ + private void release(final R resource, final
[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager.
nandakumar131 commented on a change in pull request #1564: HDDS-2223. Support ReadWrite lock in LockManager. URL: https://github.com/apache/hadoop/pull/1564#discussion_r330724170 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lock/LockManager.java ## @@ -25,42 +25,146 @@ import java.util.Map; import java.util.concurrent.ConcurrentHashMap; +import java.util.function.Consumer; /** * Manages the locks on a given resource. A new lock is created for each * and every unique resource. Uniqueness of resource depends on the * {@code equals} implementation of it. */ -public class LockManager { +public class LockManager { private static final Logger LOG = LoggerFactory.getLogger(LockManager.class); - private final Map activeLocks = new ConcurrentHashMap<>(); + private final Map activeLocks = new ConcurrentHashMap<>(); private final GenericObjectPool lockPool = new GenericObjectPool<>(new PooledLockFactory()); /** - * Creates new LockManager instance. + * Creates new LockManager instance with the given Configuration. * * @param conf Configuration object */ - public LockManager(Configuration conf) { -int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, + public LockManager(final Configuration conf) { +final int maxPoolSize = conf.getInt(HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY, HddsConfigKeys.HDDS_LOCK_MAX_CONCURRENCY_DEFAULT); lockPool.setMaxTotal(maxPoolSize); } - /** * Acquires the lock on given resource. * * If the lock is not available then the current thread becomes * disabled for thread scheduling purposes and lies dormant until the * lock has been acquired. + * + * @param resource on which the lock has to be acquired + * @deprecated Use {@link LockManager#writeLock} instead + */ + public void lock(final R resource) { + writeLock(resource); + } + + /** + * Releases the lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @deprecated Use {@link LockManager#writeUnlock} instead + */ + public void unlock(final R resource) { + writeUnlock(resource); + } + + /** + * Acquires the read lock on given resource. + * + * Acquires the read lock on resource if the write lock is not held by + * another thread and returns immediately. + * + * If the write lock on resource is held by another thread then + * the current thread becomes disabled for thread scheduling + * purposes and lies dormant until the read lock has been acquired. + * + * @param resource on which the read lock has to be acquired + */ + public void readLock(final R resource) { +acquire(resource, ActiveLock::readLock); + } + + /** + * Releases the read lock on given resource. + * + * @param resource for which the read lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void readUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::readUnlock); + } + + /** + * Acquires the write lock on given resource. + * + * Acquires the write lock on resource if neither the read nor write lock + * are held by another thread and returns immediately. + * + * If the current thread already holds the write lock then the + * hold count is incremented by one and the method returns + * immediately. + * + * If the lock is held by another thread then the current + * thread becomes disabled for thread scheduling purposes and + * lies dormant until the write lock has been acquired. + * + * @param resource on which the lock has to be acquired */ - public void lock(T resource) { -activeLocks.compute(resource, (k, v) -> { - ActiveLock lock; + public void writeLock(final R resource) { +acquire(resource, ActiveLock::writeLock); + } + + /** + * Releases the write lock on given resource. 
+ * + * @param resource for which the lock has to be released + * @throws IllegalMonitorStateException if the current thread does not + * hold this lock + */ + public void writeUnlock(final R resource) throws IllegalMonitorStateException { +release(resource, ActiveLock::writeUnlock); + } + + /** + * Acquires the lock on given resource using the provided lock function. + * + * @param resource on which the lock has to be acquired + * @param lockFn function to acquire the lock + */ + private void acquire(final R resource, final Consumer lockFn) { +lockFn.accept(getLockForLocking(resource)); Review comment: While acquiring the lock if we don't increment the active count as part of getLock call (atomically) we will end up in inconsistent state. Let's say we got the lock and didn't increment the count, we try to acquire the lock and some other
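The review thread above covers the new per-resource read/write lock API and why the lock's active count must be incremented atomically while the lock is fetched from the map. A stripped-down, hypothetical sketch of the core idea — one `ReentrantReadWriteLock` per resource key, created atomically via `ConcurrentHashMap` — without the lock pooling or active-count accounting of Ozone's actual `LockManager`:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleLockManager<R> {
    // One lock per unique resource; uniqueness follows the key's equals().
    private final ConcurrentHashMap<R, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(R resource) {
        // computeIfAbsent is atomic: concurrent callers for the same
        // resource always see the same lock instance.
        return locks.computeIfAbsent(resource, k -> new ReentrantReadWriteLock());
    }

    public void readLock(R resource)    { lockFor(resource).readLock().lock(); }
    public void readUnlock(R resource)  { locks.get(resource).readLock().unlock(); }
    public void writeLock(R resource)   { lockFor(resource).writeLock().lock(); }
    public void writeUnlock(R resource) { locks.get(resource).writeLock().unlock(); }

    // Convenience check, useful in tests.
    public boolean isWriteLocked(R resource) {
        ReentrantReadWriteLock l = locks.get(resource);
        return l != null && l.isWriteLockedByCurrentThread();
    }
}
```

`computeIfAbsent` supplies the create-and-fetch atomicity the reviewer mentions; the real patch goes further, bumping an active count inside the same map operation so an unused lock can be safely returned to its pool without racing a concurrent acquirer.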
[GitHub] [hadoop] hadoop-yetus commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG.
hadoop-yetus commented on issue #1572: HDDS-2226. S3 Secrets should use a strong RNG. URL: https://github.com/apache/hadoop/pull/1572#issuecomment-537636545

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------:|--------:|:--------|
| 0 | reexec | 38 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 45 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 43 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 59 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 859 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 25 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 976 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 46 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 22 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 20 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 20 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 58 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 770 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 25 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
| | | 2458 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1572 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a672f85dc6a7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 0d2d6f9 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1572/2/artifact/out/patch-compile-hadoop-hdds.txt |
| javac |
[jira] [Resolved] (HADOOP-15091) S3aUtils.getEncryptionAlgorithm() always logs@Debug "Using SSE-C"
[ https://issues.apache.org/jira/browse/HADOOP-15091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-15091. - Resolution: Duplicate > S3aUtils.getEncryptionAlgorithm() always logs@Debug "Using SSE-C" > - > > Key: HADOOP-15091 > URL: https://issues.apache.org/jira/browse/HADOOP-15091 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Priority: Trivial > > even when you have encryption off or set to sse-kms/aes256, the debug logs > print a comment about using SSE-C > {code} > 2017-12-05 12:44:33,292 [main] DEBUG s3a.S3AUtils > (S3AUtils.java:getEncryptionAlgorithm(1097)) - Using SSE-C with null key > {code} > That log statement should be moved to only get printed with SSE-C enabled.