[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657711#comment-16657711 ] Íñigo Goiri commented on HADOOP-15821:
--------------------------------------

Should we do an addendum or a new JIRA?

> Move Hadoop YARN Registry to Hadoop Registry
> --------------------------------------------
>
>                 Key: HADOOP-15821
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15821
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.2.0
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, HADOOP-15821.008.patch, HADOOP-15821.009.patch
>
> Currently, the Hadoop YARN Registry lives in YARN. However, it can be used by other parts of the project (e.g., HDFS), and it has no real dependency on YARN.
> We should move it into Hadoop Common and rename it Hadoop Registry.
[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657671#comment-16657671 ] Eric Yang commented on HADOOP-15821:
------------------------------------

[~elgoiri] Yes, it needs the relativePath tag for correctness.
[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657643#comment-16657643 ] Íñigo Goiri commented on HADOOP-15821:
--------------------------------------

I built trunk on a clean Linux Docker image and I got:
{code}
2018-10-20T01:15:19.8797956Z [ERROR] [ERROR] Some problems were encountered while processing the POMs:
2018-10-20T01:15:19.8799644Z [WARNING] 'parent.relativePath' of POM org.apache.hadoop:hadoop-registry:3.3.0-SNAPSHOT (/hadoop/hadoop-common-project/hadoop-registry/pom.xml) points at org.apache.hadoop:hadoop-common-project instead of org.apache.hadoop:hadoop-project, please verify your project structure @ line 19, column 11
2018-10-20T01:15:19.8801632Z [FATAL] Non-resolvable parent POM for org.apache.hadoop:hadoop-registry:3.3.0-SNAPSHOT: Could not find artifact org.apache.hadoop:hadoop-project:pom:3.3.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 19, column 11
2018-10-20T01:15:19.8801849Z @
2018-10-20T01:15:19.8803901Z [ERROR] The build could not read 1 project -> [Help 1]
2018-10-20T01:15:19.8806798Z [ERROR]
2018-10-20T01:15:19.8807806Z [ERROR] The project org.apache.hadoop:hadoop-registry:3.3.0-SNAPSHOT (/hadoop/hadoop-common-project/hadoop-registry/pom.xml) has 1 error
2018-10-20T01:15:19.8809208Z [ERROR] Non-resolvable parent POM for org.apache.hadoop:hadoop-registry:3.3.0-SNAPSHOT: Could not find artifact org.apache.hadoop:hadoop-project:pom:3.3.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 19, column 11 -> [Help 2]
2018-10-20T01:15:19.8809442Z [ERROR]
2018-10-20T01:15:19.8809831Z [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
2018-10-20T01:15:19.8810129Z [ERROR] Re-run Maven using the -X switch to enable full debug logging.
2018-10-20T01:15:19.8810193Z [ERROR]
2018-10-20T01:15:19.8810271Z [ERROR] For more information about the errors and possible solutions, please read the following articles:
2018-10-20T01:15:19.8810360Z [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
2018-10-20T01:15:19.8810461Z [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
{code}
I'm not able to reproduce this on my box, and Yetus seemed fine with it. Checking the pom, it looks like it may need {{relativePath}}:
{code}
<parent>
  <artifactId>hadoop-project</artifactId>
  <groupId>org.apache.hadoop</groupId>
  <version>3.3.0-SNAPSHOT</version>
  <relativePath>../../hadoop-project</relativePath>
</parent>
{code}
Anybody else seeing this?
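For readers hitting the same failure: when {{relativePath}} is absent, Maven defaults it to {{../pom.xml}}, which for this module is the hadoop-common-project aggregator rather than hadoop-project, which is exactly what the warning above reports. A sketch of the broken and corrected parent declarations (coordinates taken from the log; the explicit path is the proposed fix):
{code:xml}
<!-- Broken: no relativePath, so Maven looks in ../pom.xml, i.e. the
     hadoop-common-project aggregator, and parent resolution fails. -->
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-project</artifactId>
  <version>3.3.0-SNAPSHOT</version>
</parent>

<!-- Fixed: point Maven at the real parent in the source tree. -->
<parent>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-project</artifactId>
  <version>3.3.0-SNAPSHOT</version>
  <relativePath>../../hadoop-project</relativePath>
</parent>
{code}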
[jira] [Commented] (HADOOP-15866) HADOOP-15523 breaks compatibility
[ https://issues.apache.org/jira/browse/HADOOP-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657622#comment-16657622 ] Hadoop QA commented on HADOOP-15866:
------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 20m 29s | trunk passed |
| +1 | compile | 16m 15s | trunk passed |
| +1 | checkstyle | 0m 53s | trunk passed |
| +1 | mvnsite | 1m 17s | trunk passed |
| +1 | shadedclient | 13m 41s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 50s | trunk passed |
| +1 | javadoc | 1m 2s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 54s | the patch passed |
| +1 | compile | 17m 24s | the patch passed |
| +1 | javac | 17m 24s | the patch passed |
| -0 | checkstyle | 0m 54s | hadoop-common-project/hadoop-common: The patch generated 2 new + 110 unchanged - 0 fixed = 112 total (was 110) |
| +1 | mvnsite | 1m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 26s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 47s | the patch passed |
| +1 | javadoc | 1m 1s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 59s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 99m 52s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15866 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944815/HADOOP-15866.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5eaf667eab82 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00254d7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15400/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/15400/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results |
[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657609#comment-16657609 ] Hudson commented on HADOOP-15821:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15277 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15277/])
HADOOP-15821. Move YARN Registry to Hadoop Registry. (eyang: rev e2a9fa8448e2aac34c318260e425786a6c8ca2ae)
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* (add) hadoop-assemblies/src/main/resources/assemblies/hadoop-registry-dist.xml
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryTypeUtils.java
* (add) hadoop-common-project/hadoop-registry/pom.xml
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/SecureableZone.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoChildrenForEphemeralsException.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/ContainerServiceRecordProcessor.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperations.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperationsFactory.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/RegistryIOException.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZKPathDumper.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryBindingSource.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNSServer.java
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/types/AddressTypes.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/SelectByYarnPersistence.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/ServiceRecordProcessor.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/CuratorService.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/ZoneSelector.java
* (add) hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/client/binding/TestMarshalling.java
* (add) hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRegistry.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/ApplicationServiceRecordProcessor.java
* (add) hadoop-common-project/hadoop-common/src/site/markdown/registry/index.md
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestSecureRegistryDNS.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/resources/test.private
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/PathListener.java
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestMicroZookeeperService.java
* (add) hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/services/RegistryAdminService.java
* (edit) dev-support/bin/dist-layout-stitching
* (add) hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
* (add) hadoop-common-project/hadoop-common/src/site/markdown/registry/hadoop-registry.md
* (edit) hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh
* (add)
[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657598#comment-16657598 ] Íñigo Goiri commented on HADOOP-15821:
--------------------------------------

Thanks [~ste...@apache.org], [~billie.rinaldi], and [~eyang] for pushing this. I'll start working on leveraging this in HDFS.
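A minimal sketch of what an HDFS-side consumer could look like now that the client API ships from hadoop-common-project/hadoop-registry. The service path, class name, and error handling here are hypothetical; {{RegistryOperationsFactory}} and {{RegistryOperations}} are the existing client API that the patch relocated:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
import org.apache.hadoop.registry.client.types.ServiceRecord;

public class RegistryLookupSketch {
  public static ServiceRecord lookup(String path) throws Exception {
    Configuration conf = new Configuration();
    // Creates and initializes a client bound to the ZooKeeper quorum in the config.
    RegistryOperations registry = RegistryOperationsFactory.createInstance(conf);
    registry.start();
    try {
      // Hypothetical path; resolve() returns the ServiceRecord published there.
      return registry.resolve(path);
    } finally {
      registry.stop();
    }
  }
}
{code}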
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657592#comment-16657592 ] Takanobu Asanuma commented on HADOOP-15832:
-------------------------------------------

It seems some unit tests still fail. Please see YARN-8919.

> Upgrade BouncyCastle to 1.60
> ----------------------------
>
>                 Key: HADOOP-15832
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15832
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.0
>            Reporter: Robert Kanter
>            Assignee: Robert Kanter
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
> As part of my work on YARN-6586, I noticed that we're using a very old version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
> In particular, the newest release, 1.46, is from 2011! [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on: [https://www.bouncycastle.org/latest_releases.html]
> They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 release. It's currently a test-only artifact, so there should be no backwards-compatibility issues with updating this. It's also needed for YARN-6586, where we'll actually be shipping it.
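A sketch of the replacement coordinates described above (artifact family and version as stated in the issue; the test scope is assumed to stay until YARN-6586 actually ships it):
{code:xml}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <!-- scope stays test-only for now, per the discussion above -->
  <scope>test</scope>
</dependency>
{code}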
[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657581#comment-16657581 ] Eric Yang commented on HADOOP-15821:
------------------------------------

+1 [~elgoiri] Thank you for the patch. Patch 9 works as intended. The unit test failures don't appear to be related to this patch. I will commit this to trunk shortly.
[jira] [Updated] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-15821:
-------------------------------
    Target Version/s: 3.3.0
       Fix Version/s: 3.3.0
[jira] [Updated] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry
[ https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-15821:
-------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15850:
----------------------------
    Priority: Critical  (was: Major)

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-15850
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15850
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: tools/distcp
>    Affects Versions: 3.1.1
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Critical
>             Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
>         Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks throws the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ length = 5142 aclEntries = null, xAttrs = null}
> 	at org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
> 	at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> This warning shouldn't happen - the two bulk loaded hfiles are independent.
> From the contents of the two CopyListingFileStatus instances, we can see that their isSplit() returns false. Otherwise the following from toString would be logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that defeats the purpose of using DistCp.
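For context, a hypothetical sketch of the guard named in the title; the class, helper, and configuration key here are illustrative assumptions, not the committed patch:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

/** Illustrative sketch of the proposed check; names are hypothetical. */
public class ConcatGuardSketch {
  // A blocks-per-chunk value of 0 means files were copied whole, so there
  // are no chunk files to stitch back together.
  static boolean shouldConcatChunks(Configuration conf) {
    // Key string assumed here for illustration.
    return conf.getInt("distcp.blocks.per.chunk", 0) > 0;
  }

  static void commitJobSketch(Configuration conf) throws IOException {
    if (shouldConcatChunks(conf)) {
      // concatFileChunks(conf); // the merge step seen in the stack trace above
    }
  }
}
{code}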
[jira] [Updated] (HADOOP-15866) HADOOP-15523 breaks compatibility
[ https://issues.apache.org/jira/browse/HADOOP-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15866:
-------------------------------------
    Attachment: HADOOP-15866.001.patch

> HADOOP-15523 breaks compatibility
> ---------------------------------
>
>                 Key: HADOOP-15866
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15866
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4, 3.3.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: HADOOP-15866.001.patch
>
> Our internal tool found that HADOOP-15523 breaks public API compatibility in class CommonConfigurationKeysPublic:
> || ||Change||Effect||
> |1|Field HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS has been renamed to HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY.|Recompilation of a client program may be terminated with the message: cannot find variable HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS in CommonConfigurationKeysPublic.|
> |2|Field HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT has been renamed to HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT.|Recompilation of a client program may be terminated with the message: cannot find variable HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT in CommonConfigurationKeysPublic.|
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT is used to instantiate a variable in ShellBasedGroupsMapping objects, and since almost all applications require group mapping, this can cause runtime errors if an application loads multiple versions of the Hadoop library.
> IMO this is a blocker for 3.2.0.
[jira] [Updated] (HADOOP-15866) HADOOP-15523 breaks compatibility
[ https://issues.apache.org/jira/browse/HADOOP-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15866:
-------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Assigned] (HADOOP-15866) HADOOP-15523 breaks compatibility
[ https://issues.apache.org/jira/browse/HADOOP-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-15866:
----------------------------------------
    Assignee: Wei-Chiu Chuang
[jira] [Created] (HADOOP-15866) HADOOP-15523 breaks compatibility
Wei-Chiu Chuang created HADOOP-15866:
----------------------------------------
             Summary: HADOOP-15523 breaks compatibility
                 Key: HADOOP-15866
                 URL: https://issues.apache.org/jira/browse/HADOOP-15866
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 3.1.1, 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.3.0
            Reporter: Wei-Chiu Chuang
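A common remedy for this class of break is to restore the old names as deprecated aliases of the new ones; a sketch under that assumption (the literal values are placeholders, and this is not the attached patch):
{code:java}
/** Sketch: keep the old constant names compiling and linking for client code. */
public class CommonConfigurationKeysPublicSketch {
  public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY =
      "hadoop.security.groups.shell.command.timeout"; // placeholder value
  public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT =
      "0s"; // placeholder value

  /** @deprecated use {@link #HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY}. */
  @Deprecated
  public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS =
      HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY;

  /** @deprecated use {@link #HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT}. */
  @Deprecated
  public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT =
      HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT;
}
{code}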
[jira] [Commented] (HADOOP-15836) Review of AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657463#comment-16657463 ] Hudson commented on HADOOP-15836:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15276 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15276/])
HADOOP-15836. Review of AccessControlList. Contributed by BELUGA BEHR. (inigoiri: rev 00254d7b8c714ae2000d0934d260b23458033529)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java

> Review of AccessControlList
> ---------------------------
>
>                 Key: HADOOP-15836
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15836
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common, security
>    Affects Versions: 3.2.0
>            Reporter: BELUGA BEHR
>            Assignee: BELUGA BEHR
>            Priority: Minor
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-15836.1.patch
>
> * Improved unit tests (the expected / actual arguments were backwards)
> * The unit tests expected elements to be in order, but the class returned unordered Collections
> * Formatting cleanup
> * Removed superfluous white space
> * Removed use of LinkedList
> * Removed superfluous code
> * Use {{unmodifiable}} Collections where the JavaDoc states that the caller must not manipulate the data structure
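The last bullet in practice, as a generic illustration (the class and accessor here are made up rather than taken from AccessControlList):
{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class UnmodifiableViewSketch {
  private final Set<String> users = new HashSet<>();

  /**
   * The JavaDoc contract says callers must not manipulate the result, so
   * hand out an unmodifiable view: mutation attempts now fail fast with
   * UnsupportedOperationException instead of silently corrupting state.
   */
  public Collection<String> getUsers() {
    return Collections.unmodifiableSet(users);
  }
}
{code}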
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657437#comment-16657437 ] Hudson commented on HADOOP-15850:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15275 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15275/])
HADOOP-15850. CopyCommitter#concatFileChunks should check that the (weichiu: rev e2cecb681e2aab8b7c5465719cac53dce407a64c)
* (edit) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
* (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657436#comment-16657436 ] Sean Busbey commented on HADOOP-15850:
--------------------------------------

This seems more severe than "Major". Am I correct that this impacts downstream users of DistCp beyond HBase?
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657429#comment-16657429 ] Wei-Chiu Chuang commented on HADOOP-15850:
------------------------------------------

Made a trivial change to the cherry-pick for branch-3.0. Pushed the commit and posted the patch for posterity.
[jira] [Updated] (HADOOP-15836) Review of AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15836:
---------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
   Fix Version/s: 3.3.0
          Status: Resolved  (was: Patch Available)
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15850:
-------------------------------------
    Fix Version/s: 3.0.4
[jira] [Updated] (HADOOP-15836) Review of AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15836:
---------------------------------
    Summary: Review of AccessControlList  (was: Review of AccessControlList.java)
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15850: - Attachment: HADOOP-15850.branch-3.0.patch > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15850.branch-3.0.patch, HADOOP-15850.v2.patch, > HADOOP-15850.v3.patch, HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, > HADOOP-15850.v6.patch, testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. > From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() return false. 
Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
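For context, the check proposed in the title would presumably look something like the sketch below. Only the {{DistCpOptionSwitch.BLOCKS_PER_CHUNK}} option label is taken from distcp itself; the surrounding method body is an assumption, not the committed patch:
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.DistCpOptionSwitch;

public class ConcatGuardSketch {
  void concatFileChunks(Configuration conf) throws IOException {
    int blocksPerChunk = conf.getInt(
        DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
    if (blocksPerChunk <= 0) {
      // No chunking was requested, so every listed file is independent and
      // there is nothing to concatenate -- skip the sequence check that
      // misfires on the two bulk-loaded hfiles above.
      return;
    }
    // ... the real chunk-concatenation logic would run here ...
  }
}
{code}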
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657407#comment-16657407 ] Íñigo Goiri commented on HADOOP-15836: -- +1 on [^HADOOP-15836.1.patch]. > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657389#comment-16657389 ] Wei-Chiu Chuang commented on HADOOP-15850: -- The branch-3.0 backport doesn't compile. Reverted for now. {quote} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-distcp: Compilation failure [ERROR] /Users/weichiu/sandbox/upstream/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java:[85,38] incompatible types: int cannot be converted to java.lang.Throwable {quote} > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. > From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() returns false. Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From the hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
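The quoted compiler error is characteristic of backporting an SLF4J-style parameterized log call to a branch that still uses commons-logging: the compiler resolves the two-argument call to {{debug(Object, Throwable)}} and cannot convert the second argument. A minimal, hypothetical reproduction of the assumed cause (not taken from the patch):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LogOverloadDemo {
  private static final Log LOG = LogFactory.getLog(LogOverloadDemo.class);

  public static void main(String[] args) {
    int blocksPerChunk = 0;
    // SLF4J offers debug(String, Object...); commons-logging only offers
    // debug(Object) and debug(Object, Throwable). A backported call like
    //   LOG.debug("blocks per chunk: {}", blocksPerChunk);
    // therefore resolves to debug(Object, Throwable) and fails with
    // "incompatible types: int cannot be converted to java.lang.Throwable".
    LOG.debug("blocks per chunk: " + blocksPerChunk); // commons-logging idiom
  }
}
{code}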
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15850: - Fix Version/s: (was: 3.0.4) > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. > From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() return false. 
Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15850: - Resolution: Fixed Fix Version/s: 3.2.1 3.1.2 3.3.0 3.0.4 Status: Resolved (was: Patch Available) Pushed v6 patch to trunk, branch-3.2, branch-3.1, branch-3.0. Thanks [~yuzhih...@gmail.com] for the patch and [~ste...@apache.org] for the review > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. 
> From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() returns false. Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From the hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657350#comment-16657350 ] BELUGA BEHR commented on HADOOP-15836: -- I thought about this previously. This information can be read from a config file and loaded into a {{HashSet}} (as is currently implemented). If the software needed to write the user/group names back to the screen, or back to a file, the ordering of the output would almost certainly not match the input (it just depends on how the entries happen to be laid out in the backing {{HashMap}}). So as things stand, the software will most likely not build the output in the same order as the input. Therefore, it appears that there are no constraints on ordering. If we care about the output matching the input exactly, that should be a new ticket. Since the order doesn't matter, I think it might be more pleasant for a human operator if the list is ordered alphabetically. > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
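A small sketch of the point being made, with hypothetical names (not from the patch): a {{HashSet}} will generally not preserve input order, while a {{TreeSet}} at least yields a deterministic, alphabetical order:
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class OrderingDemo {
  public static void main(String[] args) {
    // Hypothetical order as read from a config file:
    String[] input = {"zoe", "alice", "mallory", "bob"};

    Set<String> hashed = new HashSet<>(Arrays.asList(input));
    Set<String> sorted = new TreeSet<>(Arrays.asList(input));

    // HashSet iteration order is unspecified, so writing the set back out
    // already reorders the names relative to the input today.
    System.out.println(hashed); // e.g. [bob, mallory, alice, zoe]

    // TreeSet gives a deterministic, human-friendly order.
    System.out.println(sorted); // [alice, bob, mallory, zoe]
  }
}
{code}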
[jira] [Commented] (HADOOP-12640) Code Review AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657333#comment-16657333 ] Hadoop QA commented on HADOOP-12640: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HADOOP-12640 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12640 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12777806/AccessControlList.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15399/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Code Review AccessControlList > - > > Key: HADOOP-12640 > URL: https://issues.apache.org/jira/browse/HADOOP-12640 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: AccessControlList.patch, AccessControlList.patch > > > After some confusion of my own, in particular with > "mapreduce.job.acl-view-job," I have looked over the AccessControlList > implementation and cleaned it up and clarified a few points. > 1) I added tests to demonstrate the existing behavior of including an > asterisk in either the username or the group field, it overrides everything > and allows all access. > "user1,user2,user3 *" = all access > "* group1,group2" = all access > "* *" = all access > "* " = all access > " *" = all access > 2) General clean-up and simplification -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
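The wildcard semantics listed in the description can be exercised directly against the class; the ACL strings below are the ones from the ticket, and {{AccessControlList}} is the real class in hadoop-common:
{code}
import org.apache.hadoop.security.authorize.AccessControlList;

public class AclWildcardDemo {
  public static void main(String[] args) {
    // The ACL string format is "users groups" (two space-separated lists).
    // Per the tests described in the ticket, a "*" in either field grants
    // access to everyone.
    System.out.println(new AccessControlList("user1,user2,user3 *").isAllAllowed()); // true
    System.out.println(new AccessControlList("* group1,group2").isAllAllowed());     // true
    System.out.println(new AccessControlList("* *").isAllAllowed());                 // true
    System.out.println(new AccessControlList("user1 group1").isAllAllowed());        // false
  }
}
{code}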
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657332#comment-16657332 ] Íñigo Goiri commented on HADOOP-15836: -- I've seen dependencies on the order of the hashes in unit tests before; not fun to find. I think that switching to an ordered tree is reasonable for this purpose. Another option would be to keep the HashSet and make the unit test not check for a particular order. Up to you. > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15760) Include Apache Commons Collections4
[ https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657319#comment-16657319 ] BELUGA BEHR commented on HADOOP-15760: -- Slow and steady. > Include Apache Commons Collections4 > --- > > Key: HADOOP-15760 > URL: https://issues.apache.org/jira/browse/HADOOP-15760 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.10.0, 3.0.3 >Reporter: BELUGA BEHR >Priority: Major > Attachments: HADOOP-15760.1.patch > > > Please allow for use of Apache Commons Collections 4 library with the end > goal of migrating from Apache Commons Collections 3. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-12640) Code Review AccessControlList
[ https://issues.apache.org/jira/browse/HADOOP-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR reassigned HADOOP-12640: Assignee: BELUGA BEHR > Code Review AccessControlList > - > > Key: HADOOP-12640 > URL: https://issues.apache.org/jira/browse/HADOOP-12640 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: AccessControlList.patch, AccessControlList.patch > > > After some confusion of my own, in particular with > "mapreduce.job.acl-view-job," I have looked over the AccessControlList > implementation and cleaned it up and clarified a few points. > 1) I added tests to demonstrate the existing behavior of including an > asterisk in either the username or the group field, it overrides everything > and allows all access. > "user1,user2,user3 *" = all access > "* group1,group2" = all access > "* *" = all access > "* " = all access > " *" = all access > 2) General clean-up and simplification -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657309#comment-16657309 ] BELUGA BEHR commented on HADOOP-15836: -- ... the {{TreeSet}} enforces alphabetical ordering of the user and group names, so I also changed the unit tests to verify that they are arranged alphabetically. > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657307#comment-16657307 ] BELUGA BEHR commented on HADOOP-15836: -- [~elgoiri] Good eyes. All things being equal, I'd prefer a {{HashSet}}; however, the unit tests care about the order of the user and group names. They just so happen to pass given today's {{HashSet}} implementation in the JDK. These assumptions on order make the tests very brittle: they could fail after a JDK upgrade that changes the {{HashSet}} implementation. Since the unit tests expect a certain order to be enforced, I used a {{TreeSet}}, which keeps its elements in sorted order explicitly, regardless of future changes to the JDK. > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
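In miniature, the brittleness described above looks like this; a hypothetical JUnit 4 test, not one from the patch. Asserting order on a {{HashSet}} passes only by accident of the current JDK's hashing; with a {{TreeSet}} the alphabetical order is part of the contract, so the assertion is legitimate:
{code}
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

import org.junit.Test;

public class OrderAssertionTest {
  @Test
  public void testAlphabeticalOrderIsPartOfTheContract() {
    // TreeSet guarantees sorted iteration order, so this assertion stays
    // valid across JDK upgrades; the same assertion against a HashSet
    // would depend on an unspecified implementation detail.
    Set<String> names = new TreeSet<>(Arrays.asList("carol", "alice", "bob"));
    assertEquals(Arrays.asList("alice", "bob", "carol"), new ArrayList<>(names));
  }
}
{code}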
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657150#comment-16657150 ] Íñigo Goiri commented on HADOOP-15836: -- These changes are easier to check than the previous ones :) My only question is the change between HashSet and TreeSet for the groups. I always thought TreeSets made more sense for Strings but never checked carefully. Do you have any reference? > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657075#comment-16657075 ] Wangda Tan commented on HADOOP-15832: - Thanks [~rkanter]! > Upgrade BouncyCastle to 1.60 > > > Key: HADOOP-15832 > URL: https://issues.apache.org/jira/browse/HADOOP-15832 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch > > > As part of my work on YARN-6586, I noticed that we're using a very old > version of BouncyCastle: > {code:xml} > >org.bouncycastle >bcprov-jdk16 >1.46 >test > > {code} > The *-jdk16 artifacts have been discontinued and are not recommended (see > [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]). > > In particular, the newest release, 1.46, is from {color:#FF}2011{color}! > [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16] > The currently maintained and recommended artifacts are *-jdk15on: > [https://www.bouncycastle.org/latest_releases.html] > They're currently on version 1.60, released only a few months ago. > We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 > release. It's currently a test-only artifact, so there should be no > backwards-compatibility issues with updating this. It's also needed for > YARN-6586, where we'll actually be shipping it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
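For reference, the upgrade described here amounts to swapping the dependency coordinates; a sketch of the replacement entry (same test scope as the old one, exact placement in the pom up to the patch):
{code:xml}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <scope>test</scope>
</dependency>
{code}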
[jira] [Commented] (HADOOP-15822) zstd compressor can fail with a small output buffer
[ https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657047#comment-16657047 ] Wei-Chiu Chuang commented on HADOOP-15822: -- Thanks [~pbacsko]; please file a new JIRA for the AIOOBE you found. > zstd compressor can fail with a small output buffer > --- > > Key: HADOOP-15822 > URL: https://issues.apache.org/jira/browse/HADOOP-15822 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Major > Attachments: HADOOP-15822.001.patch, HADOOP-15822.002.patch > > > TestZStandardCompressorDecompressor fails a couple of tests on my machine > with the latest zstd library (1.3.5). Compression can fail to successfully > finalize the stream when a small output buffer is used, resulting in a "failed > to init" error, and decompression with a direct buffer can fail with an > "invalid src size" error. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
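A rough way to exercise the small-output-buffer path described above; treat the buffer-size key name as an assumption, and note that the native Hadoop zstd library must be loadable for this to run:
{code}
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class SmallBufferDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Force a tiny internal buffer so the finish/drain path the report
    // describes is exercised repeatedly. Key name assumed from memory.
    conf.setInt("io.compression.codec.zstd.buffersize", 100);

    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    // Requires the native Hadoop zstd library on the library path.
    try (CompressionOutputStream out = codec.createOutputStream(sink)) {
      out.write(new byte[64 * 1024]); // more data than the buffer holds
      out.finish();
    }
    System.out.println("compressed to " + sink.size() + " bytes");
  }
}
{code}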
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656970#comment-16656970 ] Wei-Chiu Chuang commented on HADOOP-15850: -- +1 > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. > From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() return false. 
Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656940#comment-16656940 ] Hadoop QA commented on HADOOP-15850: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 2s{color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15850 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944725/HADOOP-15850.v6.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 88e430ae2252 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9bd1832 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15398/testReport/ | | Max. process+thread count | 335 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15398/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > >
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656912#comment-16656912 ] BELUGA BEHR commented on HADOOP-15836: -- [~elgoiri] You see what happens when you are too kind? :) Can you please take a look at this one also? > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15850: Attachment: HADOOP-15850.v6.patch > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, HADOOP-15850.v6.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. > From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() return false. 
Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656812#comment-16656812 ] Hudson commented on HADOOP-15804: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15267 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15267/]) HADOOP-15804. upgrade to commons-compress 1.18. Contributed by Akira (tasanuma: rev 9bd18324c7801472409d9ad69ea365aa7a33a9c4) * (edit) hadoop-project/pom.xml > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
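The commit touches only hadoop-project/pom.xml, so the change is presumably just the managed dependency version; a sketch along these lines:
{code:xml}
<!-- hadoop-project/pom.xml: sketch of the kind of change committed -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-compress</artifactId>
  <version>1.18</version>
</dependency>
{code}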
[jira] [Commented] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656799#comment-16656799 ] Hadoop QA commented on HADOOP-15842: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 21s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15842 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944707/HADOOP-15842-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 1cb6f77d6ff7 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285d2c0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15397/testReport/ | | Max. process+thread count | 1435 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656790#comment-16656790 ] Hadoop QA commented on HADOOP-15855: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 13 unchanged - 3 fixed = 13 total (was 16) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 24s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15855 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944704/HADOOP-15855-002.patch | | Optional Tests | dupname asflicense mvnsite compile javac javadoc mvninstall unit shadedclient findbugs checkstyle | | uname | Linux 1cdd14162243 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285d2c0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/15395/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15395/testReport/ | | Max. process+thread count | 1351 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
[jira] [Updated] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-15804: -- Fix Version/s: 3.1.2 3.0.4 3.2.1 > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656745#comment-16656745 ] Takanobu Asanuma edited comment on HADOOP-15804 at 10/19/18 12:44 PM: -- Committed to trunk, branch-3.2, branch-3.1 and branch-3.0. Thanks [~ajisakaa] for the patch, [~pj.fanning] for reporting the issue and [~jojochuang] for the comment! was (Author: tasanuma0829): Committed to trunk. Thanks [~ajisakaa] for the patch, [~pj.fanning] for reporting the issue and [~jojochuang] for the comment! > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-15804: -- Resolution: Fixed Fix Version/s: 3.3.0 Status: Resolved (was: Patch Available) > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656745#comment-16656745 ] Takanobu Asanuma commented on HADOOP-15804: --- Committed to trunk. Thanks [~ajisakaa] for the patch, [~pj.fanning] for reporting the issue and [~jojochuang] for the comment! > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15804) upgrade to commons-compress 1.18
[ https://issues.apache.org/jira/browse/HADOOP-15804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656733#comment-16656733 ] Takanobu Asanuma commented on HADOOP-15804: --- I've confirmed that the patch doesn't affect existing unit tests. +1. > upgrade to commons-compress 1.18 > > > Key: HADOOP-15804 > URL: https://issues.apache.org/jira/browse/HADOOP-15804 > Project: Hadoop Common > Issue Type: Improvement >Reporter: PJ Fanning >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-15804.01.patch > > > [https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt] > Some CVEs have been fixed in recent releases > ([https://commons.apache.org/proper/commons-compress/security-reports.html]) > [https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1] > depends on commons-compress 1.4.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK
[ https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656699#comment-16656699 ] Hadoop QA commented on HADOOP-13887: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-13887 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-13887 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15396/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Encrypt S3A data client-side with AWS SDK > - > > Key: HADOOP-13887 > URL: https://issues.apache.org/jira/browse/HADOOP-13887 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Jeeyoung Kim >Assignee: Igor Mazur >Priority: Minor > Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, > HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, > HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, > HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, > HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, > HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, > HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf > > > Expose the client-side encryption option documented in Amazon S3 > documentation - > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html > Currently this is not exposed in Hadoop but it is exposed as an option in AWS > Java SDK, which Hadoop currently includes. It should be trivial to propagate > this as a parameter passed to the S3client used in S3AFileSystem.java -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656660#comment-16656660 ] Hadoop QA commented on HADOOP-15864: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 58s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15864 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944675/HADOOP-15864-branch.2.7.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b50a3af5d3fe 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285d2c0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15394/testReport/ | | Max. process+thread count | 1442 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15394/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Job submitter /
[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656643#comment-16656643 ] Oleksandr Shevchenko commented on HADOOP-15865: --- Thanks [~lqjacklee] for your comment. Yes, HADOOP-15418 has the same root cause: iteration with iterators that are not thread-safe. The difference is that HADOOP-15418 fixed a problem with iterating over a Configuration object rather than over a Properties object; when we iterate over Properties that way, a ConcurrentModificationException can't be thrown, since we work with a local variable. In the case described in this ticket, we use the iterator of the Properties object that is passed as a parameter to the overlay() method, so we can get a ConcurrentModificationException. > ConcurrentModificationException in Configuration.overlay() method > - > > Key: HADOOP-15865 > URL: https://issues.apache.org/jira/browse/HADOOP-15865 > Project: Hadoop Common > Issue Type: Bug >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Major > Attachments: HADOOP-15865.001.patch > > > Configuration.overlay() is not thread-safe and can be the cause of > ConcurrentModificationException since we use iteration over Properties > object. > {code} > private void overlay(Properties to, Properties from) { > for (Entry entry: from.entrySet()) { > to.put(entry.getKey(), entry.getValue()); > } > } > {code} > Properties class is thread-safe but iterator is not. We should manually > synchronize on the returned set of entries which we use for iteration. > We faced with ResourceManger fails during recovery caused by > ConcurrentModificationException: > {noformat} > 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: > Service ResourceManager failed in state STARTED; cause: > java.util.ConcurrentModificationException > java.util.ConcurrentModificationException > at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) > at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) > at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) > at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) > at > org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) > at > org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) > at > org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) > at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: > removing RMDelegation token with sequence number: 
3489914 > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing > RMDelegationToken and SequenceNumber > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: > Removing RMDelegationToken_3489914 > 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on > 8032 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
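For illustration, a minimal sketch of the manually synchronized variant the comment describes (an assumption about the shape of the fix, not the attached HADOOP-15865.001.patch):
{code:java}
import java.util.Map;
import java.util.Properties;

public class OverlaySketch {
  // Hold the source Properties' monitor for the whole iteration.
  // Hashtable (which Properties extends) synchronizes its entry-set
  // view on the table itself, so concurrent put()/remove() calls block
  // until the copy completes instead of invalidating the iterator.
  static void overlay(Properties to, Properties from) {
    synchronized (from) {
      for (Map.Entry<Object, Object> entry : from.entrySet()) {
        to.put(entry.getKey(), entry.getValue());
      }
    }
  }
}
{code}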
[jira] [Assigned] (HADOOP-15619) Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features
[ https://issues.apache.org/jira/browse/HADOOP-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-15619: --- Assignee: Steve Loughran > Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features > > > Key: HADOOP-15619 > URL: https://issues.apache.org/jira/browse/HADOOP-15619 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Features for S3Guard for Hadoop 3.3. Goal: take the experimental tag off -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK
[ https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656624#comment-16656624 ] Steve Loughran commented on HADOOP-13887: - I'm looking at this in the context of HADOOP-14556, which adds the ability to serialize secrets over the wire inside a DT. I don't want the change there to cut out the option of adding CSE later, so I'm going to: * add the CSE options to the enum sent around * have the marshall/unmarshall code store a version ID, so that if we need to add a new field, any change to the writable will be detected fast. > Encrypt S3A data client-side with AWS SDK > - > > Key: HADOOP-13887 > URL: https://issues.apache.org/jira/browse/HADOOP-13887 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Jeeyoung Kim >Assignee: Igor Mazur >Priority: Minor > Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, > HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, > HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, > HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, > HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, > HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, > HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf > > > Expose the client-side encryption option documented in Amazon S3 > documentation - > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html > Currently this is not exposed in Hadoop but it is exposed as an option in AWS > Java SDK, which Hadoop currently includes. It should be trivial to propagate > this as a parameter passed to the S3client used in S3AFileSystem.java -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
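A minimal sketch of the versioned-writable idea above; the class and field names are hypothetical, only the leading version-ID check is the point:
{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical writable carrying marshalled secrets: the leading
// version ID means an incompatible change to the wire format is
// detected immediately on read, rather than misparsed downstream.
public class MarshalledSecrets implements Writable {
  private static final int VERSION = 1;
  private String encryptionOption = "";

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(VERSION);
    out.writeUTF(encryptionOption);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    int version = in.readInt();
    if (version != VERSION) {
      throw new IOException("Unsupported wire format version " + version);
    }
    encryptionOption = in.readUTF();
  }
}
{code}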
[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
[ https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656620#comment-16656620 ] Steve Loughran commented on HADOOP-15850: - Looking good; one little nit from checkstyle {code} ./hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java:105: concatFileChunks(conf);: 'if' child has incorrect indentation level 8, expected level should be 6. [Indentation] {code} > CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 > -- > > Key: HADOOP-15850 > URL: https://issues.apache.org/jira/browse/HADOOP-15850 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.1.1 >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Major > Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, > HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, > testIncrementalBackupWithBulkLoad-output.txt > > > I was investigating test failure of TestIncrementalBackupWithBulkLoad from > hbase against hadoop 3.1.1 > hbase MapReduceBackupCopyJob$BackupDistCp would create listing file: > {code} > LOG.debug("creating input listing " + listing + " , totalRecords=" + > totalRecords); > cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing); > cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, > totalRecords); > {code} > For the test case, two bulk loaded hfiles are in the listing: > {code} > 2018-10-13 14:09:24,123 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : > hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > 2018-10-13 14:09:24,125 DEBUG [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for > 2 files of 10242 > {code} > Later on, CopyCommitter#concatFileChunks would throw the following exception: > {code} > 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): > job_local1795473782_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/ > > 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e- > > 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > {code} > The above warning shouldn't happen - the two bulk loaded hfiles are > independent. 
> From the contents of the two CopyListingFileStatus instances, we can see that > their isSplit() return false. Otherwise the following from toString should be > logged: > {code} > if (isSplit()) { > sb.append(", chunkOffset = ").append(this.getChunkOffset()); > sb.append(", chunkLength = ").append(this.getChunkLength()); > } > {code} > From hbase side, we can specify one bulk loaded hfile per job but that > defeats the purpose of using DistCp. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
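For readers skimming the patch: the guard the issue title asks for amounts to skipping the concat step when no chunking was requested. A sketch, assuming the config label from DistCpOptionSwitch.BLOCKS_PER_CHUNK; the actual patch may read the value differently:
{code:java}
// Inside CopyCommitter.commitJob(), before re-assembling chunk files:
int blocksPerChunk = conf.getInt(
    DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
if (blocksPerChunk > 0) {
  // Only chunked copies produce split files that need concatenation;
  // with 0 blocks per chunk every listing entry is a whole file.
  concatFileChunks(conf);
}
{code}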
[jira] [Updated] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15842: Attachment: HADOOP-15842-002.patch > add fs.azure.account.oauth2.client.secret to > hadoop.security.sensitive-config-keys > -- > > Key: HADOOP-15842 > URL: https://issues.apache.org/jira/browse/HADOOP-15842 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15842-001.patch, HADOOP-15842-002.patch > > > in HADOOP-15839 I left out "fs.azure.account.oauth2.client.secret". Fix by > adding it -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15842: Status: Patch Available (was: Open) Patch 002: * Thomas's feedback * added the same settings to CommonConfigurationKeysPublic, as I noticed there was a default value there too. PITA to keep them in sync, but needed from a due-diligence perspective > add fs.azure.account.oauth2.client.secret to > hadoop.security.sensitive-config-keys > -- > > Key: HADOOP-15842 > URL: https://issues.apache.org/jira/browse/HADOOP-15842 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15842-001.patch, HADOOP-15842-002.patch > > > in HADOOP-15839 I left out "fs.azure.account.oauth2.client.secret". Fix by > adding it -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
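To see the effect of adding the key to the sensitive-keys list, a hedged example (the regex list below is abbreviated; the shipped default in core-default.xml is longer):
{code:java}
import org.apache.hadoop.conf.ConfigRedactor;
import org.apache.hadoop.conf.Configuration;

public class RedactionCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Abbreviated pattern list for illustration only.
    conf.set("hadoop.security.sensitive-config-keys",
        "secret$,password$,fs.azure.account.oauth2.client.secret");
    ConfigRedactor redactor = new ConfigRedactor(conf);
    // Prints the redaction placeholder instead of the raw value.
    System.out.println(redactor.redact(
        "fs.azure.account.oauth2.client.secret", "my-oauth-secret"));
  }
}
{code}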
[jira] [Updated] (HADOOP-15842) add fs.azure.account.oauth2.client.secret to hadoop.security.sensitive-config-keys
[ https://issues.apache.org/jira/browse/HADOOP-15842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15842: Status: Open (was: Patch Available) > add fs.azure.account.oauth2.client.secret to > hadoop.security.sensitive-config-keys > -- > > Key: HADOOP-15842 > URL: https://issues.apache.org/jira/browse/HADOOP-15842 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15842-001.patch > > > in HADOOP-15839 I left out "fs.azure.account.oauth2.client.secret". Fix by > adding it -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656616#comment-16656616 ] Hadoop QA commented on HADOOP-15865: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 4s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}119m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15865 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12944672/HADOOP-15865.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 45adfeacf878 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 285d2c0 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15393/testReport/ | | Max. process+thread count | 1446 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15393/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. >
[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15855: Attachment: HADOOP-15855-002.patch > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15855: Status: Patch Available (was: Open) > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15855: Status: Open (was: Patch Available) > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656608#comment-16656608 ] Steve Loughran commented on HADOOP-15855: - patch 002: applies the text changes; I didn't do the Java PKI review too, as that's a longer piece of work which someone who knows what to do will have to perform. > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
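An illustration of the credential-path pattern the doc change covers: the store holding an object store's secrets has to live on a different filesystem. Here a jceks file on HDFS supplies an S3A key; the host, path and value below are made up:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class CredentialPathExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The keystore lives on HDFS, not on the s3a store whose secret it
    // holds -- referring to a store from its own URI would be circular.
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@nn1.example.com:8020/user/alice/s3.jceks");
    // getPassword() consults the credential providers before falling
    // back to any value set in the configuration itself.
    char[] secret = conf.getPassword("fs.s3a.secret.key");
    System.out.println(secret == null ? "alias not found" : "secret resolved");
  }
}
{code}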
[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656605#comment-16656605 ] Steve Loughran commented on HADOOP-15855: - bq. It is also limited to PKI keypairs. will delegate that JVM verification to others, I'm afraid > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15855-001.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656469#comment-16656469 ] lqjacklee edited comment on HADOOP-15418 at 10/19/18 9:07 AM: -- [~suma.shivaprasad] [~jojochuang] Can the test case reproduce the issue reported and described? was (Author: lqjacklee): [~suma.shivaprasad] [~jojochuang] sorry update too late. > Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of > iterator to avoid ConcurrentModificationException > - > > Key: HADOOP-15418 > URL: https://issues.apache.org/jira/browse/HADOOP-15418 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch, > HADOOP-15418.3.patch > > > The issue is similar to what was fixed in HADOOP-15411. Fixing this in > KMSAuthenticationFilter as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656480#comment-16656480 ] lqjacklee commented on HADOOP-15865: [~oshevchenko] Is it similar to HADOOP-15418? > ConcurrentModificationException in Configuration.overlay() method > - > > Key: HADOOP-15865 > URL: https://issues.apache.org/jira/browse/HADOOP-15865 > Project: Hadoop Common > Issue Type: Bug >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Major > Attachments: HADOOP-15865.001.patch > > > Configuration.overlay() is not thread-safe and can be the cause of > ConcurrentModificationException since we use iteration over Properties > object. > {code} > private void overlay(Properties to, Properties from) { > for (Entry entry: from.entrySet()) { > to.put(entry.getKey(), entry.getValue()); > } > } > {code} > Properties class is thread-safe but iterator is not. We should manually > synchronize on the returned set of entries which we use for iteration. > We faced with ResourceManger fails during recovery caused by > ConcurrentModificationException: > {noformat} > 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: > Service ResourceManager failed in state STARTED; cause: > java.util.ConcurrentModificationException > java.util.ConcurrentModificationException > at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) > at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) > at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) > at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) > at > org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) > at > org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) > at > org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) > at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: > removing RMDelegation token with sequence number: 3489914 > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing > RMDelegationToken and SequenceNumber > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: > Removing RMDelegationToken_3489914 > 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on > 8032 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org 
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656469#comment-16656469 ] lqjacklee commented on HADOOP-15418: [~suma.shivaprasad] [~jojochuang] sorry update too late. > Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of > iterator to avoid ConcurrentModificationException > - > > Key: HADOOP-15418 > URL: https://issues.apache.org/jira/browse/HADOOP-15418 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch, > HADOOP-15418.3.patch > > > The issue is similar to what was fixed in HADOOP-15411. Fixing this in > KMSAuthenticationFilter as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656427#comment-16656427 ] He Xiaoqiao commented on HADOOP-15864: -- Submitted an initial patch based on branch-2.7: do not throw the exception, and postpone the failure if the token cannot be updated because the domain name cannot be resolved. > Job submitter / executor fail when SBN domain name can not resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: He Xiaoqiao >Assignee: He Xiaoqiao >Priority: Critical > Attachments: HADOOP-15864-branch.2.7.001.patch > > > Job submit failure and Task executes failure if Standby NameNode domain name > can not resolved on HDFS HA with DelegationToken feature. > This issue is triggered when create {{ConfiguredFailoverProxyProvider}} > instance which invoke {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with Security. Since in HDFS HA mode UGI need include separate token for each > NameNode in order to dealing with Active-Standby switch, the double tokens' > content is same of course. > However when #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > it checks whether the address of NameNode has been resolved or not, if Not, > throw #IllegalArgumentException upon, then job submitter/ task executor fail. > HDFS-8068 and HADOOP-12125 try to fix it, but I don't think the two tickets > resolve completely. > Another questions many guys consider is why NameNode domain name can not > resolve? I think there are many scenarios, for instance node replace when > meet fault, and refresh DNS sometimes. Anyway, Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a. code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > 
org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176) > at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665) > ... 35 more > Caused by: java.lang.reflect.InvocationTargetException > at
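A sketch of the "postpone the failure" idea from the comment above, illustrative only and not the attached HADOOP-15864-branch.2.7.001.patch: fall back to the host name when resolution fails instead of throwing, so a later retry can still succeed:
{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.StringUtils;

public final class TokenServiceSketch {
  private TokenServiceSketch() {}

  public static Text buildTokenService(InetSocketAddress addr,
      boolean useIpForTokenService) {
    final String host;
    if (useIpForTokenService && !addr.isUnresolved()) {
      host = addr.getAddress().getHostAddress();
    } else {
      // Unresolved host (or host-name mode): keep the lower-cased name
      // rather than throwing IllegalArgumentException, postponing the
      // failure until the token is actually used and can be retried.
      host = StringUtils.toLowerCase(addr.getHostName());
    }
    return new Text(host + ":" + addr.getPort());
  }
}
{code}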
[jira] [Updated] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15864: - Attachment: HADOOP-15864-branch.2.7.001.patch Status: Patch Available (was: Open) > Job submitter / executor fail when SBN domain name can not resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: He Xiaoqiao >Assignee: He Xiaoqiao >Priority: Critical > Attachments: HADOOP-15864-branch.2.7.001.patch > > > Job submit failure and Task executes failure if Standby NameNode domain name > can not resolved on HDFS HA with DelegationToken feature. > This issue is triggered when create {{ConfiguredFailoverProxyProvider}} > instance which invoke {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with Security. Since in HDFS HA mode UGI need include separate token for each > NameNode in order to dealing with Active-Standby switch, the double tokens' > content is same of course. > However when #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > it checks whether the address of NameNode has been resolved or not, if Not, > throw #IllegalArgumentException upon, then job submitter/ task executor fail. > HDFS-8068 and HADOOP-12125 try to fix it, but I don't think the two tickets > resolve completely. > Another questions many guys consider is why NameNode domain name can not > resolve? I think there are many scenarios, for instance node replace when > meet fault, and refresh DNS sometimes. Anyway, Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a. code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176) > at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665) > ... 35 more > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) > at >
[jira] [Updated] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleksandr Shevchenko updated HADOOP-15865: -- Attachment: HADOOP-15865.001.patch Status: Patch Available (was: Open) Could someone review the attached patch? Thanks! > ConcurrentModificationException in Configuration.overlay() method > - > > Key: HADOOP-15865 > URL: https://issues.apache.org/jira/browse/HADOOP-15865 > Project: Hadoop Common > Issue Type: Bug >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Major > Attachments: HADOOP-15865.001.patch > > > Configuration.overlay() is not thread-safe and can be the cause of > ConcurrentModificationException since we use iteration over Properties > object. > {code} > private void overlay(Properties to, Properties from) { > for (Entry entry: from.entrySet()) { > to.put(entry.getKey(), entry.getValue()); > } > } > {code} > Properties class is thread-safe but iterator is not. We should manually > synchronize on the returned set of entries which we use for iteration. > We faced with ResourceManger fails during recovery caused by > ConcurrentModificationException: > {noformat} > 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: > Service ResourceManager failed in state STARTED; cause: > java.util.ConcurrentModificationException > java.util.ConcurrentModificationException > at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) > at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) > at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) > at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) > at > org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) > at > org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) > at > org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) > at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: > removing RMDelegation token with sequence number: 3489914 > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing > RMDelegationToken and SequenceNumber > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: > Removing RMDelegationToken_3489914 > 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on > 8032 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: 
common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
Oleksandr Shevchenko created HADOOP-15865: - Summary: ConcurrentModificationException in Configuration.overlay() method Key: HADOOP-15865 URL: https://issues.apache.org/jira/browse/HADOOP-15865 Project: Hadoop Common Issue Type: Bug Reporter: Oleksandr Shevchenko Assignee: Oleksandr Shevchenko Configuration.overlay() is not thread-safe and can be the cause of ConcurrentModificationException since we use iteration over Properties object. {code} private void overlay(Properties to, Properties from) { for (Entry entry: from.entrySet()) { to.put(entry.getKey(), entry.getValue()); } } {code} Properties class is thread-safe but iterator is not. We should manually synchronize on the returned set of entries which we use for iteration. We faced with ResourceManger fails during recovery caused by ConcurrentModificationException: {noformat} 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: Service ResourceManager failed in state STARTED; cause: java.util.ConcurrentModificationException java.util.ConcurrentModificationException at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) at org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) at org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) at org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) 2018-10-12 08:00:56,968 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: removing RMDelegation token with sequence number: 3489914 2018-10-12 08:00:56,968 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing RMDelegationToken and SequenceNumber 2018-10-12 08:00:56,968 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: Removing RMDelegationToken_3489914 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032 {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15864) Job submitter / executor fail when SBN domain name can not resolved
He Xiaoqiao created HADOOP-15864:
------------------------------------

             Summary: Job submitter / executor fail when SBN domain name can not resolved
                 Key: HADOOP-15864
                 URL: https://issues.apache.org/jira/browse/HADOOP-15864
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: He Xiaoqiao
            Assignee: He Xiaoqiao


Job submission and task execution fail if the Standby NameNode domain name cannot be resolved on HDFS HA with the DelegationToken feature. The issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode with security enabled. In HDFS HA mode the UGI needs to include a separate token for each NameNode in order to handle the Active-Standby switch; the two tokens carry the same content, of course. However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} checks whether the NameNode address has been resolved; if not, it throws an IllegalArgumentException, and the job submitter / task executor fails. HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think those two tickets resolve it completely.

Another question many people raise is why a NameNode domain name would fail to resolve at all. There are many scenarios, for instance replacing a node after a fault, or a DNS refresh. In any case, a Standby NameNode failure should not impact Hadoop cluster stability, in my opinion.

a. Code reference: org.apache.hadoop.security.SecurityUtil, lines 373-386
{code:java}
public static Text buildTokenService(InetSocketAddress addr) {
  String host = null;
  if (useIpForTokenService) {
    if (addr.isUnresolved()) { // host has no ip address
      throw new IllegalArgumentException(
          new UnknownHostException(addr.getHostName())
      );
    }
    host = addr.getAddress().getHostAddress();
  } else {
    host = StringUtils.toLowerCase(addr.getHostName());
  }
  return new Text(host + ":" + addr.getPort());
}
{code}

b. Exception log reference:
{noformat}
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
	at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:761)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:691)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
	at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.<init>(ChRootedFileSystem.java:106)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172)
	at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303)
	at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:377)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:172)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:176)
	at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:665)
	... 35 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:498)
	... 58 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: standbynamenode
	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:390)
	at org.apache.hadoop.security.SecurityUtil.setTokenService(SecurityUtil.java:369)
	at