[jira] [Updated] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes
[ https://issues.apache.org/jira/browse/HADOOP-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15345: - Affects Version/s: 2.7.6 Status: Patch Available (was: Open) submit patch v1 and backport HADOOP-12185 to branch-2.7 > Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient > adding/getting/removing nodes > --- > > Key: HADOOP-15345 > URL: https://issues.apache.org/jira/browse/HADOOP-15345 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Attachments: HADOOP-15345-branch-2.7.001.patch > > > As per discussion in HADOOP-15343 backport HADOOP-12185 to branch-2.7 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes
[ https://issues.apache.org/jira/browse/HADOOP-15345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15345: - Attachment: HADOOP-15345-branch-2.7.001.patch > Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient > adding/getting/removing nodes > --- > > Key: HADOOP-15345 > URL: https://issues.apache.org/jira/browse/HADOOP-15345 > Project: Hadoop Common > Issue Type: Improvement >Reporter: He Xiaoqiao >Priority: Major > Attachments: HADOOP-15345-branch-2.7.001.patch > > > As per discussion in HADOOP-15343 backport HADOOP-12185 to branch-2.7
[jira] [Created] (HADOOP-15345) Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes
He Xiaoqiao created HADOOP-15345: Summary: Backport HADOOP-12185 to branch-2.7: NetworkTopology is not efficient adding/getting/removing nodes Key: HADOOP-15345 URL: https://issues.apache.org/jira/browse/HADOOP-15345 Project: Hadoop Common Issue Type: Improvement Reporter: He Xiaoqiao As per discussion in HADOOP-15343 backport HADOOP-12185 to branch-2.7
[jira] [Commented] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries
[ https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415062#comment-16415062 ] genericqa commented on HADOOP-14759: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 13s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 31s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HADOOP-14759 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916333/HADOOP-14759.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4ceef4a59b80 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c22d62b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14391/testReport/ | | Max. process+thread count | 344 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14391/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > S3GuardTool prune to prune specific bucket entries > -- > > Key: HADOOP-14759 > URL: ht
[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14445: --- Attachment: (was: HADOOP-14445.09.patch) > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does > not share delegation tokens (a client uses the KMS address/port as the key for > the delegation token). > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But the KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
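The {{openConnection}} snippet quoted above looks tokens up by a service key built from the URL's host and port. A minimal standalone sketch of why that defeats sharing across KMS instances follows; plain Java maps and made-up hostnames stand in for Hadoop's {{Credentials}} and {{SecurityUtil}}, so this only illustrates the lookup mismatch, not the real API.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenServiceSketch {
    // Stand-in for SecurityUtil.buildTokenService: the key is host:port.
    public static String buildTokenService(String host, int port) {
        return host + ":" + port;
    }

    public static void main(String[] args) {
        Map<String, String> creds = new HashMap<>();
        // A token obtained from one KMS instance is stored under that
        // instance's host:port key.
        creds.put(buildTokenService("kms-1.example.com", 9600), "delegation-token");

        // Same logical KMS service, but the lookup for a second instance
        // misses, so the client cannot reuse the token.
        System.out.println(creds.get(buildTokenService("kms-1.example.com", 9600)));
        System.out.println(creds.get(buildTokenService("kms-2.example.com", 9600))); // null
    }
}
```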
[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14445: --- Attachment: HADOOP-14445.09.patch > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does > not share delegation tokens (a client uses the KMS address/port as the key for > the delegation token). > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But the KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
[jira] [Assigned] (HADOOP-14758) S3GuardTool.prune to handle UnsupportedOperationException
[ https://issues.apache.org/jira/browse/HADOOP-14758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota reassigned HADOOP-14758: --- Assignee: Gabor Bota > S3GuardTool.prune to handle UnsupportedOperationException > - > > Key: HADOOP-14758 > URL: https://issues.apache.org/jira/browse/HADOOP-14758 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Trivial > > {{MetadataStore.prune()}} may throw {{UnsupportedOperationException}} if not > supported. > {{S3GuardTool.prune}} should recognise this, catch it, and treat it > differently from any other failure, e.g. inform the user and return 0, as it's a no-op
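The control flow the issue asks for can be sketched as follows; {{MetadataStore}} here is a hypothetical stand-in interface, not the real S3Guard class, and the exit-code convention (0 for the no-op case) is taken from the issue text.

```java
public class PruneSketch {
    // Stand-in for MetadataStore, whose prune() may be unsupported.
    public interface MetadataStore {
        void prune(long modTime);
    }

    // Treat an unsupported prune as a successful no-op (exit code 0)
    // instead of letting the exception surface as a generic failure.
    public static int prune(MetadataStore store, long modTime) {
        try {
            store.prune(modTime);
        } catch (UnsupportedOperationException e) {
            System.err.println("Metadata store does not support prune; nothing to do.");
        }
        return 0; // a no-op still counts as success
    }

    public static void main(String[] args) {
        MetadataStore unsupported = modTime -> {
            throw new UnsupportedOperationException("prune");
        };
        System.out.println(prune(unsupported, 0L)); // prints 0
    }
}
```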
[jira] [Updated] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15343: - Resolution: Duplicate Status: Resolved (was: Patch Available) > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children of > the current {{InnerNode}}, even if it has already found the expected result, based on > branch-2.7. This issue may cause some performance loss, especially for a large > & busy cluster with many nodes under a rack. I think it should exit the loop > earlier rather than traverse all children of {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code}
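The fix the summary proposes, leaving the loop at the first match, can be sketched with simplified stand-in types; the real code walks {{InnerNode}} children with {{getName()}}, which a plain list of names approximates here.

```java
import java.util.Arrays;
import java.util.List;

public class GetLocSketch {
    // Early-exit lookup: stop scanning as soon as the matching child is
    // found, instead of continuing through the rest of the list.
    public static String findChild(List<String> children, String name) {
        for (String child : children) {
            if (child.equals(name)) {
                return child; // exit the loop at the first match
            }
        }
        return null; // non-existing node
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("rack0", "rack1", "rack2");
        System.out.println(findChild(children, "rack1")); // prints rack1
    }
}
```

With thousands of nodes under a rack, the early return roughly halves the average scan length, which matches the performance motivation stated in the report.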
[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries
[ https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14759: Status: Patch Available (was: Open) Updated documentation for prune at hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md > S3GuardTool prune to prune specific bucket entries > -- > > Key: HADOOP-14759 > URL: https://issues.apache.org/jira/browse/HADOOP-14759 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, > HADOOP-14759.003.patch, HADOOP-14759.004.patch > > > Users may think that when you provide a URI to a bucket, you are pruning all > entries in the table *for that bucket*. In fact you are purging all entries > across all buckets in the table: > {code} > hadoop s3guard prune -days 7 s3a://ireland-1 > {code} > It should be restricted to that bucket, unless you specify otherwise > +maybe also add a hard date rather than a relative one
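The restriction requested above amounts to filtering prune candidates by bucket prefix. A hedged sketch with illustrative entry paths follows; this is not the actual S3Guard table schema or API, only the selection logic.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BucketPruneSketch {
    // Given entries from several buckets, drop only those under the
    // requested bucket URI and return the survivors.
    public static List<String> pruneBucket(List<String> entries, String bucketUri) {
        String prefix = bucketUri.endsWith("/") ? bucketUri : bucketUri + "/";
        List<String> kept = new ArrayList<>();
        for (String entry : entries) {
            if (!entry.startsWith(prefix)) {
                kept.add(entry); // entries of other buckets are untouched
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList(
            "s3a://ireland-1/data/a", "s3a://ireland-2/data/b");
        // Only the ireland-1 entry is pruned; ireland-2 survives.
        System.out.println(pruneBucket(table, "s3a://ireland-1")); // prints [s3a://ireland-2/data/b]
    }
}
```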
[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries
[ https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14759: Attachment: HADOOP-14759.004.patch > S3GuardTool prune to prune specific bucket entries > -- > > Key: HADOOP-14759 > URL: https://issues.apache.org/jira/browse/HADOOP-14759 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, > HADOOP-14759.003.patch, HADOOP-14759.004.patch > > > Users may think that when you provide a URI to a bucket, you are pruning all > entries in the table *for that bucket*. In fact you are purging all entries > across all buckets in the table: > {code} > hadoop s3guard prune -days 7 s3a://ireland-1 > {code} > It should be restricted to that bucket, unless you specify otherwise > +maybe also add a hard date rather than a relative one
[jira] [Commented] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415003#comment-16415003 ] He Xiaoqiao commented on HADOOP-15343: -- Thanks [~shahrs87] [~ajayydv], it makes sense to me to backport HADOOP-12185 to branch-2.7, and it improves performance more significantly based on the test results in HADOOP-12185. If there is no related issue for the HADOOP-12185 backport, I will create one to follow up later. Thanks again. > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children of > the current {{InnerNode}}, even if it has already found the expected result, based on > branch-2.7. This issue may cause some performance loss, especially for a large > & busy cluster with many nodes under a rack. I think it should exit the loop > earlier rather than traverse all children of {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code}
[jira] [Work started] (HADOOP-15336) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HADOOP-15336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-15336 started by Sherwood Zheng. --- > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HADOOP-15336 > URL: https://issues.apache.org/jira/browse/HADOOP-15336 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > Labels: backward-incompatible, common >
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414864#comment-16414864 ] genericqa commented on HADOOP-14445: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 7s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-kms in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 59s{color} | {color:orange} hadoop-common-project: The patch generated 6 new + 286 unchanged - 5 fixed = 292 total (was 291) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 11s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 10s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HADOOP-14445 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916295/HADOOP-14445.09.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 87f1938c0cbe 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c22d62b | | maven
[jira] [Commented] (HADOOP-15313) TestKMS should close providers
[ https://issues.apache.org/jira/browse/HADOOP-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414778#comment-16414778 ] Hudson commented on HADOOP-15313: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13884 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13884/]) HADOOP-15313. TestKMS should close providers. (xiao: rev c22d62b338cb16d93c4576a9c634041e3610a116) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MultipleIOException.java * (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java > TestKMS should close providers > -- > > Key: HADOOP-15313 > URL: https://issues.apache.org/jira/browse/HADOOP-15313 > Project: Hadoop Common > Issue Type: Test > Components: kms, test >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15313.01.patch, HADOOP-15313.02.patch, > HADOOP-15313.03.patch > > > During the review of HADOOP-14445, [~jojochuang] found that the key providers > are not closed in tests. Details in [this > comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16397824&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16397824]. > We should investigate and handle that in all related tests.
[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-12862: - Labels: release-blocker (was: ) Target Version/s: 2.7.6 Marked as blocker for 2.7.6, because the feature doesn't work without this. > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: release-blocker > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, the Hadoop name node, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing a > truststore for the client to verify the server, the server also verifies that the > client's certificate is real, and the client stores its own certificate in > its keystore.
> However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures a keystore but no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP > server's certificate) > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and use that to configure the system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}} > I am a security layman so my words may be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
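The two JSSE system properties named in the description are the standard JVM-wide ones; a minimal sketch of wiring a truststore in through them follows. The path and password are placeholders, and in Hadoop these values would come from the proposed configuration properties rather than being hard-coded.

```java
public class TruststoreSketch {
    public static void main(String[] args) {
        // Standard JSSE properties: tell the JVM which truststore to use
        // when verifying the LDAP server's certificate.
        System.setProperty("javax.net.ssl.trustStore", "/etc/hadoop/ldap-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        // Any SSL connection opened after this point (e.g. LDAPS via JNDI)
        // verifies the peer against the configured truststore.
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```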
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414765#comment-16414765 ] Konstantin Shvachko commented on HADOOP-12862: -- I see what you mean now. The same name {{hadoop.security.group.mapping.ldap.ssl.truststore.password}} is used both as a config variable and as a key in a {{CredentialProvider}}. We want to remove the former, but keep the latter. I think making {{Configuration.getPasswordFromCredentialProviders()}} public is the right direction. Then we will explicitly not fall back to reading the truststore password from configuration. > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, the Hadoop name node, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing a > truststore for the client to verify the server, the server also verifies that the > client's certificate is real, and the client stores its own certificate in > its keystore.
> However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures a keystore but no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP > server's certificate) > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and use that to configure the system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}} > I am a security layman so my words may be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
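The lookup order discussed in the preceding comment, credential provider only with no fall-back to the plain config entry, can be sketched with plain maps standing in for Hadoop's {{CredentialProvider}} and {{Configuration}}; the method name mirrors {{Configuration.getPasswordFromCredentialProviders()}} but this is an illustrative stand-in, not the Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

public class PasswordLookupSketch {
    // Read the password from the credential store only; return null when
    // absent, with deliberately no fallback to a plain config value.
    public static char[] getPasswordFromCredentialProviders(
            Map<String, char[]> credentialStore, String key) {
        return credentialStore.get(key);
    }

    public static void main(String[] args) {
        Map<String, char[]> provider = new HashMap<>();
        provider.put("hadoop.security.group.mapping.ldap.ssl.truststore.password",
                "secret".toCharArray());

        char[] pw = getPasswordFromCredentialProviders(provider,
                "hadoop.security.group.mapping.ldap.ssl.truststore.password");
        System.out.println(pw == null ? "not found" : new String(pw)); // prints secret
    }
}
```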
[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414729#comment-16414729 ] Xiao Chen edited comment on HADOOP-14445 at 3/26/18 11:01 PM: -- [^HADOOP-14445.09.patch] should address all comments from Rushabh, exceptions below. Regarding TestKMS, {quote} 2. providersCreated: {quote} I disagree, because even in tests we should code against the interface. It's an implementation detail that {{createProvider}} only returns the KMSCP subclass of KeyProvider, and the test code should just handle KeyProvider for cleanness. This has been split out to HADOOP-15313 to limit the scope; let's move further discussions there, or feel free to file follow-ons. Agreed {{LoadBalancingKMSCP#close}} should throw instead of swallow - feels like a bug. Created HADOOP-15344 for that. {quote}4. testTokenCompatibilityOldRenewer {quote} The reason for not choosing a shorter amount of time is that after the renewal, we want to authenticate using that token to all KMS instances. While a small renew interval would mean less wait, it also poses higher risks of flaky test failures if the authentication did not run within that time. Jenkins slaves are usually unreliable. Ideally one should find a way to hook into the secret manager and change intervals from the test - but that seems pretty messy, so left as-is. Let me know what you think. Also updated the test to verify it actually works with every KMSCP inside LBKMSCP. was (Author: xiaochen): [^HADOOP-14445.09.patch] should address all comments from Rushabh, exceptions below. Regarding TestKMS, bq. 2. providersCreated: I disagree because even in tests we should code against interface. It's implementation detail that {{createProvider}} only returns the KMSCP subclass of KeyProvider, and the Test code should just handle KeyProvider for cleanness. Agreed {{LoadBalancingKMSCP#close}} should throw instead of swallow - feels like a bug. Created HADOOP-15344 for that. bq. 
4. testTokenCompatibilityOldRenewer The reason for not choosing a shorter amount of time is after the renewal, we want to authenticate using that token to all KMS instances. While a small renew interval would mean less wait, it also poses higher risks of flaky test failures if the authentication did not run within that time. Jenkins slaves are usually unreliable. Ideally one should find a way to haul into the secret manager, and change intervals from the test - but that seems pretty messy to do so left as-is. Let me know what you think. Also updated the test to verify it actually works with every KMCSP inside LBKMSCP. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. 
> Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
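To see why the tokens end up unshared, note that the quoted {{openConnection}} snippet keys credentials by the KMS host and port. A toy stand-in - not Hadoop code; the class and names are illustrative simplifications of {{SecurityUtil.buildTokenService}} - shows how a token stored under one HA instance's service key is missed when the client looks it up under another instance's key:

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class TokenServiceKeySketch {

  // Simplified stand-in for SecurityUtil.buildTokenService:
  // key a delegation token by the server's host:port.
  static String serviceKey(URI kmsUri) {
    return kmsUri.getHost() + ":" + kmsUri.getPort();
  }

  public static void main(String[] args) {
    Map<String, String> credentials = new HashMap<>();
    // Token obtained from the first KMS instance, stored under its own address.
    credentials.put(serviceKey(URI.create("https://kms1.example.com:9600")), "token-A");
    // Lookup against the second HA instance misses, even though the token
    // would be accepted there (the instances share the signing secret).
    String lookedUp = credentials.get(serviceKey(URI.create("https://kms2.example.com:9600")));
    System.out.println(lookedUp); // null: the token is not found for the second instance
  }
}
```

Sharing requires either a common service name for the whole LoadBalancingKMSClientProvider group (the direction of the patches on this issue) or a documentation fix, as the description says.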
[jira] [Updated] (HADOOP-15313) TestKMS should close providers
[ https://issues.apache.org/jira/browse/HADOOP-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-15313: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks for the review Wei-Chiu! > TestKMS should close providers > -- > > Key: HADOOP-15313 > URL: https://issues.apache.org/jira/browse/HADOOP-15313 > Project: Hadoop Common > Issue Type: Test > Components: kms, test >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15313.01.patch, HADOOP-15313.02.patch, > HADOOP-15313.03.patch > > > During the review of HADOOP-14445, [~jojochuang] found that key providers > are not closed in tests. Details in [this > comment|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16397824&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16397824]. > We should investigate and handle that in all related tests.
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414729#comment-16414729 ] Xiao Chen commented on HADOOP-14445: [^HADOOP-14445.09.patch] should address all comments from Rushabh, exceptions below. Regarding TestKMS, bq. 2. providersCreated: I disagree, because even in tests we should code against the interface. It's an implementation detail that {{createProvider}} only returns the KMSCP subclass of KeyProvider, and the test code should just handle KeyProvider for cleanness. Agreed {{LoadBalancingKMSCP#close}} should throw instead of swallow - feels like a bug. Created HADOOP-15344 for that. bq. 4. testTokenCompatibilityOldRenewer The reason for not choosing a shorter amount of time is that after the renewal, we want to authenticate using that token to all KMS instances. While a small renew interval would mean less wait, it also poses higher risks of flaky test failures if the authentication did not run within that time. Jenkins slaves are usually unreliable. Ideally one should find a way to hook into the secret manager and change intervals from the test - but that seems pretty messy, so left as-is. Let me know what you think. Also updated the test to verify it actually works with every KMSCP inside LBKMSCP. 
> Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
[jira] [Created] (HADOOP-15344) LoadBalancingKMSClientProvider#close should not swallow exceptions
Xiao Chen created HADOOP-15344: -- Summary: LoadBalancingKMSClientProvider#close should not swallow exceptions Key: HADOOP-15344 URL: https://issues.apache.org/jira/browse/HADOOP-15344 Project: Hadoop Common Issue Type: Bug Components: kms Reporter: Xiao Chen As [~shahrs87]'s comment on HADOOP-14445 says: {quote} LoadBalancingKMSCP never throws IOException back. It just swallows all the IOException and just logs it. ... Maybe we might want to return MultipleIOException from LoadBalancingKMSCP#close. {quote}
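A sketch of the suggested behavior - illustrative only, not the eventual patch; Hadoop's own fix would presumably use {{org.apache.hadoop.io.MultipleIOException}} rather than this hand-rolled aggregation - is to close every delegate provider, collect the failures, and rethrow them together instead of only logging:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class AggregatingClose {

  // Close every delegate, collecting failures instead of swallowing them,
  // then surface them as a single IOException. Remaining delegates are
  // still closed even when an earlier close() throws.
  static void closeAll(List<? extends Closeable> delegates) throws IOException {
    List<IOException> failures = new ArrayList<>();
    for (Closeable c : delegates) {
      try {
        c.close();
      } catch (IOException e) {
        failures.add(e); // keep closing the remaining delegates
      }
    }
    if (failures.size() == 1) {
      throw failures.get(0);
    } else if (!failures.isEmpty()) {
      IOException combined =
          new IOException(failures.size() + " providers failed to close");
      for (IOException e : failures) {
        combined.addSuppressed(e);
      }
      throw combined;
    }
  }
}
```

The key point is that a failure to close one provider neither hides the error from the caller nor prevents the other providers from being closed.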
[jira] [Commented] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.
[ https://issues.apache.org/jira/browse/HADOOP-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414720#comment-16414720 ] Larry McCay commented on HADOOP-15325: -- All of that said, if you did want to codify that decision, you could just make getPasswordFromCredentialsProvider method public and use that directly. > Add an option to make Configuration.getPassword() not to fallback to read > passwords from configuration. > --- > > Key: HADOOP-15325 > URL: https://issues.apache.org/jira/browse/HADOOP-15325 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Zsolt Venczel >Priority: Major > > HADOOP-10607 added a public API Configuration.getPassword() which reads > passwords from credential provider and then falls back to reading from > configuration if one is not available. > This API has been used throughout Hadoop codebase and downstream > applications. It is understandable for old password configuration keys to > fallback to configuration to maintain backward compatibility. But for new > configuration passwords that don't have legacy, there should be an option to > _not_ fallback, because storing passwords in configuration is considered a > bad security practice.
[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14445: --- Attachment: HADOOP-14445.09.patch > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens. 
[jira] [Commented] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.
[ https://issues.apache.org/jira/browse/HADOOP-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414715#comment-16414715 ] Larry McCay commented on HADOOP-15325: -- [~shv] - It took me a few mins to understand your perspective. I think that you are viewing the use of getPassword as a mechanism only to be used for old passwords that used to be stored in configuration and that any new ones should use the credential provider API directly instead. My view was that Configuration.getPassword is a nice convenient utility for accessing passwords that may be stored physically in the config file or elsewhere securely via credential provider API. I think both perspectives are probably valid but it seems somewhat unreasonable to make code that may already be acquiring passwords through getPassword use a different mechanism just because the passwords are newer and don't need backward compatibility. I can see a list of property names that allow clear text storage for backward compatibility or other reasons being an easy way to: * limit newer properties to only secure storage * allow new deployments rather than upgrades to turn off clear text storage completely * migrate to no clear text over time Codifying this deployment specific behavior to make the decisions for them seems too inflexible. Considering that we already have a configurable fallback behavior, extending this to treat some properties differently from a deployment consideration seems to make sense to me and would allow you to achieve exactly the behavior that you describe without making the decision for others. 
{code}
/** @see core-default.xml */
public static final String HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK =
    "hadoop.security.credential.clear-text-fallback";
public static final boolean HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK_DEFAULT = true;
{code} The above is used in the getPasswordFromConfig method: {code}
/**
 * Fallback to clear text passwords in configuration.
 * @param name property name to look up
 * @return clear text password or null
 */
protected char[] getPasswordFromConfig(String name) {
  char[] pass = null;
  if (getBoolean(CredentialProvider.CLEAR_TEXT_FALLBACK,
      CommonConfigurationKeysPublic.HADOOP_SECURITY_CREDENTIAL_CLEAR_TEXT_FALLBACK_DEFAULT)) {
    String passStr = get(name);
    if (passStr != null) {
      pass = passStr.toCharArray();
    }
  }
  return pass;
}
{code} Check a property name list after the overall fallback-enabled check and you are good to go. We may want to have some special values like ALL|NONE and also allow for a list of property names that don't allow fallback. Default to ALL for current behavior. Very strict environments can set it to NONE, and others can choose the new-properties-vs-old-properties approach or whatever they like. > Add an option to make Configuration.getPassword() not to fallback to read > passwords from configuration. > --- > > Key: HADOOP-15325 > URL: https://issues.apache.org/jira/browse/HADOOP-15325 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Zsolt Venczel >Priority: Major > > HADOOP-10607 added a public API Configuration.getPassword() which reads > passwords from credential provider and then falls back to reading from > configuration if one is not available. > This API has been used throughout Hadoop codebase and downstream > applications. It is understandable for old password configuration keys to > fallback to configuration to maintain backward compatibility. 
But for new > configuration passwords that don't have legacy, there should be an option to > _not_ fallback, because storing passwords in configuration is considered a > bad security practice.
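A sketch of the ALL|NONE|list idea described above - the property names and exact semantics here are hypothetical, not the committed behavior: ALL keeps today's fallback for everything, NONE disables clear-text fallback entirely, and any other value is read as a comma-separated list of property names that must not fall back:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ClearTextFallbackPolicy {

  private final boolean allowAll;
  private final boolean denyAll;
  private final Set<String> denied;

  // policy would come from a configuration key such as (hypothetically)
  // hadoop.security.credential.clear-text-fallback.properties
  ClearTextFallbackPolicy(String policy) {
    this.allowAll = "ALL".equals(policy);   // current behavior, the default
    this.denyAll = "NONE".equals(policy);   // strict environments
    this.denied = new HashSet<>(Arrays.asList(policy.split("\\s*,\\s*")));
  }

  // Would gate getPasswordFromConfig: only properties allowed to fall
  // back may be read as clear text from the configuration.
  boolean mayFallBack(String propertyName) {
    if (allowAll) {
      return true;
    }
    if (denyAll) {
      return false;
    }
    return !denied.contains(propertyName);
  }
}
```

This keeps the decision deployment-specific, as argued above: upgrades can stay on ALL, new deployments can list their new password properties, and hardened sites can set NONE.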
[jira] [Updated] (HADOOP-15299) Bump Hadoop's Jackson 2 dependency 2.9.x
[ https://issues.apache.org/jira/browse/HADOOP-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15299: --- Fix Version/s: 3.2.0 > Bump Hadoop's Jackson 2 dependency 2.9.x > > > Key: HADOOP-15299 > URL: https://issues.apache.org/jira/browse/HADOOP-15299 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15299.001.patch > > > There are a few new CVEs open against Jackson 2.7.x. It doesn't (necessarily) > mean Hadoop is vulnerable to the attack - I don't know that it is, but fixes > were released for Jackson 2.8.x and 2.9.x but not 2.7.x (which we're on). We > shouldn't be on an unmaintained line, regardless. HBase is already on 2.9.x, > we have a shaded client now, the API changes are relatively minor and so far > in my testing I haven't seen any problems. I think many of our usual reasons > to hesitate upgrading this dependency don't apply.
[jira] [Updated] (HADOOP-15299) Bump Hadoop's Jackson 2 dependency 2.9.x
[ https://issues.apache.org/jira/browse/HADOOP-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15299: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Bump Hadoop's Jackson 2 dependency 2.9.x > > > Key: HADOOP-15299 > URL: https://issues.apache.org/jira/browse/HADOOP-15299 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Major > Attachments: HADOOP-15299.001.patch > > > There are a few new CVEs open against Jackson 2.7.x. It doesn't (necessarily) > mean Hadoop is vulnerable to the attack - I don't know that it is, but fixes > were released for Jackson 2.8.x and 2.9.x but not 2.7.x (which we're on). We > shouldn't be on an unmaintained line, regardless. HBase is already on 2.9.x, > we have a shaded client now, the API changes are relatively minor and so far > in my testing I haven't seen any problems. I think many of our usual reasons > to hesitate upgrading this dependency don't apply.
[jira] [Comment Edited] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414630#comment-16414630 ] Ajay Kumar edited comment on HADOOP-15317 at 3/26/18 10:01 PM: --- [~xiaochen], thanks for updating the patch. New change to handle the best case increases the probability of initial nodes being chosen. This results in sporadic failure of new test cases. {code} Test failure java.lang.AssertionError: excludedNodes: [5.5.5.5:9866, 2.2.2.2:9866, 3.3.3.3:9866] result:{19.19.19.19:9866=0, 10.10.10.10:9866=0, 17.17.17.17:9866=0, 12.12.12.12:9866=0, 9.9.9.9:9866=0, 11.11.11.11:9866=0, 6.6.6.6:9866=0, 1.1.1.1:9866=100, 20.20.20.20:9866=0, 4.4.4.4:9866=0, 5.5.5.5:9866=0, 2.2.2.2:9866=0, 8.8.8.8:9866=0, 14.14.14.14:9866=0, 3.3.3.3:9866=0, 7.7.7.7:9866=0, 13.13.13.13:9866=0, 18.18.18.18:9866=0, 15.15.15.15:9866=0, 16.16.16.16:9866=0} at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.net.TestNetworkTopology.verifyResults(TestNetworkTopology.java:523) at org.apache.hadoop.net.TestNetworkTopology.testChooseRandomInclude1(TestNetworkTopology.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} This 
is result of bounding next int to available nodes {{r.nextInt(availableNodes);}} as availableNodes <= parentNode.getNumOfLeaves(). One way to avoid this is to choose nextInt twice as suggested in my last comment. For debugging purpose it would be good to include {{excludedNodes}} in assert conditions. {{L521/523 TestNetworkTopology#verifyResults}} was (Author: ajayydv): [~xiaochen], thanks for updating the patch. New change to handle the best case increases the probability of initial nodes being chosen. This results in sporadic failure of new test cases. {code} Test failure java.lang.AssertionError: excludedNodes: [5.5.5.5:9866, 2.2.2.2:9866, 3.3.3.3:9866] result:{19.19.19.19:9866=0, 10.10.10.10:9866=0, 17.17.17.17:9866=0, 12.12.12.12:9866=0, 9.9.9.9:9866=0, 11.11.11.11:9866=0, 6.6.6.6:9866=0, 1.1.1.1:9866=100, 20.20.20.20:9866=0, 4.4.4.4:9866=0, 5.5.5.5:9866=0, 2.2.2.2:9866=0, 8.8.8.8:9866=0, 14.14.14.14:9866=0, 3.3.3.3:9866=0, 7.7.7.7:9866=0, 13.13.13.13:9866=0, 18.18.18.18:9866=0, 15.15.15.15:9866=0, 16.16.16.16:9866=0} at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.net.TestNetworkTopology.verifyResults(TestNetworkTopology.java:523) at org.apache.hadoop.net.TestNetworkTopology.testChooseRandomInclude1(TestNetworkTopology.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} This is result of bounding next int to available nodes {{r.nextInt(availableNodes);}} as availableNodes <= parentNode.getNumOfLeaves(). One way to avoid this is to choose nextInt twice as suggested in my last comment. For debugging purpose it would be good to include {{excludedNodes}} in assert conditions. {{L521/523 TestNetworkTopology#verifyResults}} > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issue
[jira] [Comment Edited] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414630#comment-16414630 ] Ajay Kumar edited comment on HADOOP-15317 at 3/26/18 9:59 PM: -- [~xiaochen], thanks for updating the patch. New change to handle the best case increases the probability of initial nodes being chosen. This results in sporadic failure of new test cases. {code} Test failure java.lang.AssertionError: excludedNodes: [5.5.5.5:9866, 2.2.2.2:9866, 3.3.3.3:9866] result:{19.19.19.19:9866=0, 10.10.10.10:9866=0, 17.17.17.17:9866=0, 12.12.12.12:9866=0, 9.9.9.9:9866=0, 11.11.11.11:9866=0, 6.6.6.6:9866=0, 1.1.1.1:9866=100, 20.20.20.20:9866=0, 4.4.4.4:9866=0, 5.5.5.5:9866=0, 2.2.2.2:9866=0, 8.8.8.8:9866=0, 14.14.14.14:9866=0, 3.3.3.3:9866=0, 7.7.7.7:9866=0, 13.13.13.13:9866=0, 18.18.18.18:9866=0, 15.15.15.15:9866=0, 16.16.16.16:9866=0} at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.net.TestNetworkTopology.verifyResults(TestNetworkTopology.java:523) at org.apache.hadoop.net.TestNetworkTopology.testChooseRandomInclude1(TestNetworkTopology.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} This 
is result of bounding next int to available nodes {{r.nextInt(availableNodes);}} as availableNodes <= parentNode.getNumOfLeaves(). One way to avoid this is to choose nextInt twice as suggested in my last comment. For debugging purpose it would be good to include {{excludedNodes}} in assert conditions. {{L521/523 TestNetworkTopology#verifyResults}} was (Author: ajayydv): [~xiaochen], thanks for updating the patch. New change to handle the best case increases the probability of initial nodes being chosen. This results in sporadic failure of new test cases. This is result of bounding next int to available nodes {{r.nextInt(availableNodes);}} as availableNodes <= parentNode.getNumOfLeaves(). {code} Test failure java.lang.AssertionError: excludedNodes: [5.5.5.5:9866, 2.2.2.2:9866, 3.3.3.3:9866] result:{19.19.19.19:9866=0, 10.10.10.10:9866=0, 17.17.17.17:9866=0, 12.12.12.12:9866=0, 9.9.9.9:9866=0, 11.11.11.11:9866=0, 6.6.6.6:9866=0, 1.1.1.1:9866=100, 20.20.20.20:9866=0, 4.4.4.4:9866=0, 5.5.5.5:9866=0, 2.2.2.2:9866=0, 8.8.8.8:9866=0, 14.14.14.14:9866=0, 3.3.3.3:9866=0, 7.7.7.7:9866=0, 13.13.13.13:9866=0, 18.18.18.18:9866=0, 15.15.15.15:9866=0, 16.16.16.16:9866=0} at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.net.TestNetworkTopology.verifyResults(TestNetworkTopology.java:523) at org.apache.hadoop.net.TestNetworkTopology.testChooseRandomInclude1(TestNetworkTopology.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) 
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} One way to avoid this is to choose nextInt twice as suggested in my last comment. For debugging purpose it would be good to include {{excludedNodes}} in assert conditions. {{L521/523 TestNetworkTopology#verifyResults}} > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.
[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414630#comment-16414630 ] Ajay Kumar commented on HADOOP-15317: - [~xiaochen], thanks for updating the patch. New change to handle the best case increases the probability of initial nodes being chosen. This results in sporadic failure of new test cases. This is result of bounding next int to available nodes {{r.nextInt(availableNodes);}} as availableNodes <= parentNode.getNumOfLeaves(). {code} Test failure java.lang.AssertionError: excludedNodes: [5.5.5.5:9866, 2.2.2.2:9866, 3.3.3.3:9866] result:{19.19.19.19:9866=0, 10.10.10.10:9866=0, 17.17.17.17:9866=0, 12.12.12.12:9866=0, 9.9.9.9:9866=0, 11.11.11.11:9866=0, 6.6.6.6:9866=0, 1.1.1.1:9866=100, 20.20.20.20:9866=0, 4.4.4.4:9866=0, 5.5.5.5:9866=0, 2.2.2.2:9866=0, 8.8.8.8:9866=0, 14.14.14.14:9866=0, 3.3.3.3:9866=0, 7.7.7.7:9866=0, 13.13.13.13:9866=0, 18.18.18.18:9866=0, 15.15.15.15:9866=0, 16.16.16.16:9866=0} at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.net.TestNetworkTopology.verifyResults(TestNetworkTopology.java:523) at org.apache.hadoop.net.TestNetworkTopology.testChooseRandomInclude1(TestNetworkTopology.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} One way to avoid this is to choose nextInt twice as suggested in my last comment. For debugging purpose it would be good to include {{excludedNodes}} in assert conditions. {{L521/523 TestNetworkTopology#verifyResults}} > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.apache.org/jira/browse/HADOOP-15317 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, > HADOOP-15317.03.patch > > > Recently we found a postmortem case where the ANN seems to be in an infinite > loop. From the logs it seems it just went through a rolling restart, and DNs > are getting registered. > Later the NN become unresponsive, and from the stacktrace it's inside a > do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done > in HDFS-10320. > Going through the code and logs I'm not able to come up with any theory > (thought about incorrect locking, or the Node object being modified outside > of NetworkTopology, both seem impossible) why this is happening, but we > should eliminate this loop. 
> stacktrace: > {noformat} > Stack: > java.util.HashMap.hash(HashMap.java:338) > java.util.HashMap.containsKey(HashMap.java:595) > java.util.HashSet.contains(HashSet.java:203) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115) > org.apache.hadoop.hdfs.server.blockmanagement.Block
[jira] [Commented] (HADOOP-15299) Bump Hadoop's Jackson 2 dependency 2.9.x
[ https://issues.apache.org/jira/browse/HADOOP-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414593#comment-16414593 ] Sean Mackrory commented on HADOOP-15299: The failure is unrelated - most commits since this morning have failed because the wrong version of protoc is installed. Will start a mailing list thread as I don't immediately see one already. > Bump Hadoop's Jackson 2 dependency 2.9.x > > > Key: HADOOP-15299 > URL: https://issues.apache.org/jira/browse/HADOOP-15299 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Major > Attachments: HADOOP-15299.001.patch > > > There are a few new CVEs open against Jackson 2.7.x. It doesn't (necessarily) > mean Hadoop is vulnerable to the attack - I don't know that it is, but fixes > were released for Jackson 2.8.x and 2.9.x but not 2.7.x (which we're on). We > shouldn't be on an unmaintained line, regardless. HBase is already on 2.9.x, > we have a shaded client now, the API changes are relatively minor and so far > in my testing I haven't seen any problems. I think many of our usual reasons > to hesitate upgrading this dependency don't apply. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414581#comment-16414581 ] Wei-Chiu Chuang edited comment on HADOOP-12862 at 3/26/18 9:23 PM: --- Hi Konstantin, There's probably a misunderstanding here. I fully agree that storing passwords in config files is a bad thing, so I proposed in HADOOP-15325 to add an option to NOT get passwords from the config. Once HADOOP-15325 is in, I can update this patch so that it won't read passwords from config files, but from credential files. Or, make Configuration#getPasswordFromCredentialProviders() public instead of protected. That way I can update this patch to call Configuration#getPasswordFromCredentialProviders() to read from credential files. was (Author: jojochuang): Hi Konstantin, There's probably a misunderstanding here. I fully agree that storing passwords in config files is a bad thing, so I proposed in HADOOP-15325 to add an option to NOT get passwords from the config. Once HADOOP-15325 is in, I can update this patch so that it won't read passwords from config files, but from credential files. > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, the Hadoop name node, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. 
In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing a > truststore for the client to verify the server, the server also verifies the > client's certificate is real, and the client stores its own certificate in > its keystore. > However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures a keystore but no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP > server's certificate). > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and use that to configure the system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}. > I am a security layman so my words can be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414581#comment-16414581 ] Wei-Chiu Chuang commented on HADOOP-12862: -- Hi Konstantin, There's probably a misunderstanding here. I fully agree that storing passwords in config files is a bad thing, so I proposed in HADOOP-15325 to add an option to NOT get passwords from the config. Once HADOOP-15325 is in, I can update this patch so that it won't read passwords from config files, but from credential files. > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch, HADOOP-12862.008.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, the Hadoop name node, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing a > truststore for the client to verify the server, the server also verifies the > client's certificate is real, and the client stores its own certificate in > its keystore. 
> However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures a keystore but no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the LDAP > server's certificate). > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and use that to configure the system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}. > I am a security layman so my words can be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
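The extra pair of system properties the issue proposes can be sketched in a few lines. The truststore path and password below are placeholders, not values from any of the attached patches:

```java
// Sketch only: wiring a client-side truststore for LDAP over SSL through
// the JSSE system properties named in the issue. The path and password
// here are placeholders for values that would come from new config keys.
public class LdapsTrustStoreSketch {
  public static void main(String[] args) {
    System.setProperty("javax.net.ssl.trustStore", "/etc/hadoop/ldap-truststore.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    // Subsequent javax.naming LDAP connections over ldaps:// pick these up
    // when the default SSLContext is built.
    System.out.println(System.getProperty("javax.net.ssl.trustStore"));
  }
}
```

Note these are JVM-wide properties, which is one reason the issue frames this as client-truststore configuration rather than a per-connection setting.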
[jira] [Commented] (HADOOP-15325) Add an option to make Configuration.getPassword() not to fallback to read passwords from configuration.
[ https://issues.apache.org/jira/browse/HADOOP-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414568#comment-16414568 ] Konstantin Shvachko commented on HADOOP-15325: -- My comment from HADOOP-12862. I don't think this makes sense. It is like adding an optional option to ignore an optional parameter. People should just NOT put passwords in configs. We tolerate previously introduced password parameters for backward compatibility. But we should not add new password fields into configs. > Add an option to make Configuration.getPassword() not to fallback to read > passwords from configuration. > --- > > Key: HADOOP-15325 > URL: https://issues.apache.org/jira/browse/HADOOP-15325 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Zsolt Venczel >Priority: Major > > HADOOP-10607 added a public API Configuration.getPassword() which reads > passwords from credential provider and then falls back to reading from > configuration if one is not available. > This API has been used throughout Hadoop codebase and downstream > applications. It is understandable for old password configuration keys to > fallback to configuration to maintain backward compatibility. But for new > configuration passwords that don't have legacy, there should be an option to > _not_ fallback, because storing passwords in configuration is considered a > bad security practice. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
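The fallback behaviour under debate can be sketched with plain maps. This illustrates only the control flow being discussed; it is not Hadoop's actual Configuration implementation, and the method names and the boolean flag are hypothetical:

```java
import java.util.Map;

// Sketch of the getPassword() control flow: consult the credential provider
// first, and only fall back to the plain config value when fallback is
// allowed. Names and signature are illustrative, not Hadoop's API.
public class PasswordFallbackSketch {
  static char[] getPassword(Map<String, char[]> credentialProvider,
                            Map<String, String> config,
                            String key,
                            boolean allowConfigFallback) {
    char[] fromProvider = credentialProvider.get(key);
    if (fromProvider != null) {
      return fromProvider;              // preferred: credential provider
    }
    String fromConfig = config.get(key);
    if (allowConfigFallback && fromConfig != null) {
      return fromConfig.toCharArray();  // legacy path: password in config
    }
    return null;                        // new keys: no fallback
  }

  public static void main(String[] args) {
    Map<String, char[]> provider = new java.util.HashMap<>();
    Map<String, String> config = new java.util.HashMap<>();
    config.put("ssl.server.keystore.password", "hunter2");
    // With fallback disabled, a config-only password is not returned.
    System.out.println(getPassword(provider, config,
        "ssl.server.keystore.password", false));
  }
}
```

The disagreement above is essentially about whether new keys should ever take the `allowConfigFallback == true` branch.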
[jira] [Commented] (HADOOP-15339) Support additional key/value properties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414548#comment-16414548 ] Elek, Marton commented on HADOOP-15339: --- FTR: Trunk jenkins failure seems to be independent: {code} [ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:3.2.0-SNAPSHOT:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' -> [Help 1] {code} It may have been scheduled on a wrong node. The next build was good: https://builds.apache.org/job/Hadoop-trunk-Commit/13879/ > Support additional key/value properties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports additional key/value pairs which can be part of the > address of the JMX bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key/value pairs and > also introduces a new unit test for the MBeans utility, which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
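What "additional key/value properties" means at the JMX level can be shown with the JDK's ObjectName alone. The helper below is illustrative; it is not the MBeans.register signature from the patch:

```java
import java.util.Collections;
import java.util.Hashtable;
import java.util.Map;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Illustrative helper (not Hadoop's MBeans.register): extra key/value pairs
// simply become additional properties of the JMX ObjectName, in the same way
// that java.lang:type=MemoryManager,name=CodeCacheManager combines two keys.
public class JmxNameSketch {
  static ObjectName buildName(String domain, String service, String name,
                              Map<String, String> extra)
      throws MalformedObjectNameException {
    Hashtable<String, String> props = new Hashtable<>(extra);
    props.put("service", service);
    props.put("name", name);
    return new ObjectName(domain, props);
  }

  public static void main(String[] args) throws Exception {
    ObjectName on = buildName("Hadoop", "NameNode", "MetricsSystem",
        Collections.singletonMap("tag", "cluster1"));
    // A pattern such as Hadoop:tag=cluster1,* would now match every mbean
    // registered with the same tag, across namenode and datanode alike.
    System.out.println(on.getCanonicalName());
  }
}
```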
[jira] [Issue Comment Deleted] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15317: Comment: was deleted (was: Hi [~xiaochen], Seems you missed adding "special case to handle the best case scenario as an improvement. " in patch v3.) > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.apache.org/jira/browse/HADOOP-15317 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, > HADOOP-15317.03.patch > > > Recently we found a postmortem case where the ANN seems to be in an infinite > loop. From the logs it seems it just went through a rolling restart, and DNs > are getting registered. > Later the NN become unresponsive, and from the stacktrace it's inside a > do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done > in HDFS-10320. > Going through the code and logs I'm not able to come up with any theory > (thought about incorrect locking, or the Node object being modified outside > of NetworkTopology, both seem impossible) why this is happening, but we > should eliminate this loop. 
> stacktrace: > {noformat} > Stack: > java.util.HashMap.hash(HashMap.java:338) > java.util.HashMap.containsKey(HashMap.java:595) > java.util.HashSet.contains(HashSet.java:203) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115) > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599) > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414507#comment-16414507 ] Ajay Kumar commented on HADOOP-15317: - Hi [~xiaochen], Seems you missed adding "special case to handle the best case scenario as an improvement. " in patch v3. > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.apache.org/jira/browse/HADOOP-15317 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-15317.01.patch, HADOOP-15317.02.patch, > HADOOP-15317.03.patch > > > Recently we found a postmortem case where the ANN seems to be in an infinite > loop. From the logs it seems it just went through a rolling restart, and DNs > are getting registered. > Later the NN become unresponsive, and from the stacktrace it's inside a > do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done > in HDFS-10320. > Going through the code and logs I'm not able to come up with any theory > (thought about incorrect locking, or the Node object being modified outside > of NetworkTopology, both seem impossible) why this is happening, but we > should eliminate this loop. 
> stacktrace: > {noformat} > Stack: > java.util.HashMap.hash(HashMap.java:338) > java.util.HashMap.containsKey(HashMap.java:595) > java.util.HashSet.contains(HashSet.java:203) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115) > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599) > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15299) Bump Hadoop's Jackson 2 dependency 2.9.x
[ https://issues.apache.org/jira/browse/HADOOP-15299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414482#comment-16414482 ] Hudson commented on HADOOP-15299: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13880 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13880/]) HADOOP-15299. Bump Jackson 2 version to Jackson 2.9.x. (mackrorysd: rev 82665a7887a4bbb3afbc257bec31089173f3a969) * (edit) hadoop-project/pom.xml * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/PluginStoreTestUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestLogInfo.java * (edit) hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/state/StatePool.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LogInfo.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java * (edit) hadoop-tools/hadoop-rumen/src/test/java/org/apache/hadoop/tools/rumen/TestHistograms.java > Bump Hadoop's Jackson 2 dependency 2.9.x > > > Key: HADOOP-15299 > URL: https://issues.apache.org/jira/browse/HADOOP-15299 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Major > Attachments: HADOOP-15299.001.patch > > > There are a few new CVEs open against Jackson 2.7.x. It doesn't (necessarily) > mean Hadoop is vulnerable to the attack - I don't know that it is, but fixes > were released for Jackson 2.8.x and 2.9.x but not 2.7.x (which we're on). We > shouldn't be on an unmaintained line, regardless. 
HBase is already on 2.9.x, > we have a shaded client now, the API changes are relatively minor and so far > in my testing I haven't seen any problems. I think many of our usual reasons > to hesitate upgrading this dependency don't apply. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414457#comment-16414457 ] Ajay Kumar commented on HADOOP-15343: - +1, Would be great if we can add a test case. (Since getLoc is private, maybe we can use getNode.) Missed [~shahrs87] suggestion earlier. +1 for back-porting [HADOOP-12185] as it has additional improvements as well. > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children of > the current {{InnerNode}}, even if it has already found the expected result, based on > branch-2.7. This issue may lead to some performance loss, especially for a large > & busy cluster with many nodes under a rack. I think it should exit the loop > earlier rather than traverse all children of the {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414335#comment-16414335 ] Ajay Kumar edited comment on HADOOP-15343 at 3/26/18 7:04 PM: -- +1, Would be great if we can add a test case. (Since getLoc is private, maybe we can use getNode.) was (Author: ajayydv): +1 > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children of > the current {{InnerNode}}, even if it has already found the expected result, based on > branch-2.7. This issue may lead to some performance loss, especially for a large > & busy cluster with many nodes under a rack. I think it should exit the loop > earlier rather than traverse all children of the {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414335#comment-16414335 ] Ajay Kumar commented on HADOOP-15343: - +1 > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children of > the current {{InnerNode}}, even if it has already found the expected result, based on > branch-2.7. This issue may lead to some performance loss, especially for a large > & busy cluster with many nodes under a rack. I think it should exit the loop > earlier rather than traverse all children of the {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
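The early exit the issue proposes amounts to breaking out of the linear scan on the first name match. A self-contained sketch of that pattern (hypothetical names, not the NetworkTopology classes):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed fix: stop scanning the children list as soon as the
// matching child is found, instead of always walking every entry.
public class EarlyExitLookupSketch {
  static int findChild(List<String> childNames, String name) {
    for (int i = 0; i < childNames.size(); i++) {
      if (childNames.get(i).equals(name)) {
        return i; // exit early; the remaining children need not be checked
      }
    }
    return -1;    // non-existing node
  }

  public static void main(String[] args) {
    List<String> racks = Arrays.asList("rack0", "rack1", "rack2");
    System.out.println(findChild(racks, "rack1")); // 1
    System.out.println(findChild(racks, "rack9")); // -1
  }
}
```

Since rack names are unique among the children of an InnerNode, returning the first match cannot change the result, only the amount of work done.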
[jira] [Commented] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414307#comment-16414307 ] genericqa commented on HADOOP-15343: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} branch-2.7 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 65 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 23s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestCallQueueManager | | | hadoop.ipc.TestDecayRpcScheduler | | | hadoop.util.bloom.TestBloomFilters | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:06eafee | | JIRA Issue | HADOOP-15343 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916231/HADOOP-15343-branch-2.7.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cd00829f874f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.7 / 85502d3 | | maven | version: Apache Maven 3.0.5 | | Default Java | 1.7.0_151 | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/14389/artifact/out/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14389/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14389/testReport/ | | Max. process+thread count | 544 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14389/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-15188) azure datalake AzureADAuthenticator failing, no error info provided
[ https://issues.apache.org/jira/browse/HADOOP-15188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15188: Priority: Minor (was: Major) > azure datalake AzureADAuthenticator failing, no error info provided > --- > > Key: HADOOP-15188 > URL: https://issues.apache.org/jira/browse/HADOOP-15188 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0, 3.1.0 >Reporter: Steve Loughran >Assignee: Atul Sikaria >Priority: Minor > > Get a failure in ADLS client, but nothing useful in terms of failure > description > {code} > DEBUG oauth2.AzureADAuthenticator: AADToken: starting to fetch token using > client creds for client ID > DEBUG store.HttpTransport: > HTTPRequest,Failed,cReqId:,lat:127370,err:HTTP0(null),Reqlen:0,Resplen:0,token_ns:,sReqId:null,path:,qp:op=GETFILESTATUS&tooid=true&api-version=2016-11-01 > {code} > so: we had a failure but the response code is 0, error(null); "something > happened but we don't know what" > Looks like this log message is in the ADLS SDK, and can be translated like > this. > {code} > String logline = > "HTTPRequest," + outcome + > ",cReqId:" + opts.requestid + > ",lat:" + Long.toString(resp.lastCallLatency) + > ",err:" + error + > ",Reqlen:" + length + > ",Resplen:" + respLength + > ",token_ns:" + Long.toString(resp.tokenAcquisitionLatency) + > ",sReqId:" + resp.requestId + > ",path:" + path + > ",qp:" + queryParams.serialize(); > {code} > It looks like whatever code tries to parse the JSON response from the OAuth > service couldn't make sense of the response, and we end up with nothing back. > Not sure what can be done in hadoop to handle this, except maybe provide more > diags on request failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
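Given the field layout of that SDK log line, the interesting fields can be pulled out mechanically when triaging such failures. The parser below is a hypothetical diagnostic aid, not part of the ADLS SDK:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper that extracts the err: field from an ADLS SDK
// HttpTransport log line of the form quoted above; useful when triaging
// why a request failed with no response body.
public class AdlLogErr {
  public static String errField(String logline) {
    Matcher m = Pattern.compile("err:([^,]*)").matcher(logline);
    return m.find() ? m.group(1) : null;
  }

  public static void main(String[] args) {
    String line = "HTTPRequest,Failed,cReqId:,lat:127370,err:HTTP0(null),"
        + "Reqlen:0,Resplen:0,token_ns:,sReqId:null,path:,"
        + "qp:op=GETFILESTATUS&tooid=true&api-version=2016-11-01";
    // HTTP status 0 with a null error means the transport never got a
    // parseable response, i.e. the "something happened but we don't
    // know what" case described in the issue.
    System.out.println(errField(line)); // HTTP0(null)
  }
}
```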
[jira] [Commented] (HADOOP-15188) azure datalake AzureADAuthenticator failing, no error info provided
[ https://issues.apache.org/jira/browse/HADOOP-15188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414257#comment-16414257 ] Steve Loughran commented on HADOOP-15188: - thanks, will mark this one as a duplicate > azure datalake AzureADAuthenticator failing, no error info provided > --- > > Key: HADOOP-15188 > URL: https://issues.apache.org/jira/browse/HADOOP-15188 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Affects Versions: 3.0.0, 3.1.0 >Reporter: Steve Loughran >Assignee: Atul Sikaria >Priority: Major > > Get a failure in ADLS client, but nothing useful in terms of failure > description > {code} > DEBUG oauth2.AzureADAuthenticator: AADToken: starting to fetch token using > client creds for client ID > DEBUG store.HttpTransport: > HTTPRequest,Failed,cReqId:,lat:127370,err:HTTP0(null),Reqlen:0,Resplen:0,token_ns:,sReqId:null,path:,qp:op=GETFILESTATUS&tooid=true&api-version=2016-11-01 > {code} > so: we had a failure but the response code is 0, error(null); "something > happened but we don't know what" > Looks like this log message is in the ADLS SDK, and can be translated like > this. > {code} > String logline = > "HTTPRequest," + outcome + > ",cReqId:" + opts.requestid + > ",lat:" + Long.toString(resp.lastCallLatency) + > ",err:" + error + > ",Reqlen:" + length + > ",Resplen:" + respLength + > ",token_ns:" + Long.toString(resp.tokenAcquisitionLatency) + > ",sReqId:" + resp.requestId + > ",path:" + path + > ",qp:" + queryParams.serialize(); > {code} > It looks like whatever code tries to parse the JSON response from the OAuth > service couldn't make sense of the response, and we end up with nothing back. > Not sure what can be done in hadoop to handle this, except maybe provide more > diags on request failures. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414246#comment-16414246 ] Rushabh S Shah commented on HADOOP-15343: - Does it make sense to backport HADOOP-12185 to branch-2.7? HADOOP-12185 is trying to solve the inefficiencies in adding/getting/removing nodes from {{NetworkTopology}}. I understand this jira is a subset of HADOOP-12185, but it doesn't hurt to get more performance benefits. > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children > of the current {{InnerNode}}, even if it has already found the expected result, > based on branch-2.7. This issue may cause some performance loss, especially for > a large & busy cluster with many nodes under a rack. I think it should exit the > loop earlier rather than traverse all children of {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
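The one-line change under discussion can be sketched outside of Hadoop as follows. `GetLocSketch` and `findChild` are hypothetical stand-ins for the branch-2.7 `InnerNode` child scan, shown only to illustrate the early-exit behavior:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the InnerNode child scan in branch-2.7, used
// only to illustrate the suggested fix: break out of the loop as soon as
// the matching child is found instead of scanning every remaining child.
public class GetLocSketch {
  public static String findChild(List<String> children, String name) {
    String childnode = null;
    for (int i = 0; i < children.size(); i++) {
      if (children.get(i).equals(name)) {
        childnode = children.get(i);
        break; // early exit; the original loop kept iterating to the end
      }
    }
    return childnode;
  }

  public static void main(String[] args) {
    List<String> rack = Arrays.asList("dn1", "dn2", "dn3");
    System.out.println(findChild(rack, "dn2")); // dn2
    System.out.println(findChild(rack, "dn9")); // null
  }
}
```

On a rack with many DataNodes this turns the average lookup from a full scan into roughly a half scan, which matches the motivation stated in the issue.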
[jira] [Updated] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-15339: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) Thanks, [~elek] for the contribution. I've committed the patch to the trunk. > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part of the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduces a new unit test for the MBeans utility, which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
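The idea can be illustrated with the plain JMX API. The domain ("Hadoop"), the bean, and the custom "tag" property below are illustrative assumptions, not the actual signature of Hadoop's MBeans.register helper:

```java
import java.lang.management.ManagementFactory;
import java.util.Hashtable;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of registering an MBean whose ObjectName carries extra key/value
// properties, which is the capability the patch adds to Hadoop's JMX
// registration helper. All names here are made-up examples.
public class JmxPropsSketch {
  public interface CounterMXBean { int getValue(); }
  public static class Counter implements CounterMXBean {
    public int getValue() { return 42; }
  }

  public static ObjectName registerWithProps() throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    Hashtable<String, String> props = new Hashtable<>();
    props.put("service", "Demo");
    props.put("name", "Counter");
    props.put("tag", "shared"); // the additional custom key/value pair
    ObjectName name = new ObjectName("Hadoop", props);
    if (server.isRegistered(name)) {
      server.unregisterMBean(name); // keep the sketch re-runnable
    }
    server.registerMBean(new Counter(), name);
    return name;
  }

  public static void main(String[] args) throws Exception {
    ObjectName name = registerWithProps();
    // Beans sharing tag=shared can now be queried as a group, e.g. with
    // the ObjectName pattern Hadoop:tag=shared,*
    System.out.println(ManagementFactory.getPlatformMBeanServer()
        .getAttribute(name, "Value")); // 42
  }
}
```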
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414241#comment-16414241 ] Hudson commented on HADOOP-15339: - FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13878 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13878/]) HADOOP-15339. Support additional key/value propereties in JMX bean (xyao: rev 22194f3d21fd28b97c6197a8dd1917d3d23d7cc8) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/MBeans.java * (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/util/TestMBeans.java * (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/util/DummyMXBean.java > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414183#comment-16414183 ] genericqa commented on HADOOP-15339: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 55s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}108m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HADOOP-15339 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916218/HADOOP-15339.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux cdd6b6a435e6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cfc3a1c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14388/testReport/ | | Max. process+thread count | 1377 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14388/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support additional key/value propereties in JMX bea
[jira] [Updated] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15343: - Status: Patch Available (was: Open) Submitted patch v1 for branch-2.7. > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children > of the current {{InnerNode}}, even if it has already found the expected result, > based on branch-2.7. This issue may cause some performance loss, especially for > a large & busy cluster with many nodes under a rack. I think it should exit the > loop earlier rather than traverse all children of {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
[ https://issues.apache.org/jira/browse/HADOOP-15343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Xiaoqiao updated HADOOP-15343: - Attachment: HADOOP-15343-branch-2.7.001.patch > NetworkTopology#getLoc should exit loop earlier rather than traverse all > children > - > > Key: HADOOP-15343 > URL: https://issues.apache.org/jira/browse/HADOOP-15343 > Project: Hadoop Common > Issue Type: Improvement > Components: performance >Affects Versions: 2.7.6 >Reporter: He Xiaoqiao >Priority: Major > Labels: performance > Attachments: HADOOP-15343-branch-2.7.001.patch > > > NetworkTopology#getLoc returns a proper node only after traversing ALL children > of the current {{InnerNode}}, even if it has already found the expected result, > based on branch-2.7. This issue may cause some performance loss, especially for > a large & busy cluster with many nodes under a rack. I think it should exit the > loop earlier rather than traverse all children of {{InnerNode}}. > {code:java} > private Node getLoc(String loc) { > if (loc == null || loc.length() == 0) return this; > > String[] path = loc.split(PATH_SEPARATOR_STR, 2); > Node childnode = null; > for(int i=0; i<children.size(); i++) { > if (children.get(i).getName().equals(path[0])) { > childnode = children.get(i); > } > } > if (childnode == null) return null; // non-existing node > if (path.length == 1) return childnode; > if (childnode instanceof InnerNode) { > return ((InnerNode)childnode).getLoc(path[1]); > } else { > return null; > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15343) NetworkTopology#getLoc should exit loop earlier rather than traverse all children
He Xiaoqiao created HADOOP-15343: Summary: NetworkTopology#getLoc should exit loop earlier rather than traverse all children Key: HADOOP-15343 URL: https://issues.apache.org/jira/browse/HADOOP-15343 Project: Hadoop Common Issue Type: Improvement Components: performance Affects Versions: 2.7.6 Reporter: He Xiaoqiao NetworkTopology#getLoc returns a proper node only after traversing ALL children of the current {{InnerNode}}, even if it has already found the expected result, based on branch-2.7. This issue may cause some performance loss, especially for a large & busy cluster with many nodes under a rack. I think it should exit the loop earlier rather than traverse all children of {{InnerNode}}. {code:java} private Node getLoc(String loc) { if (loc == null || loc.length() == 0) return this; String[] path = loc.split(PATH_SEPARATOR_STR, 2); Node childnode = null; for(int i=0; i<children.size(); i++) { if (children.get(i).getName().equals(path[0])) { childnode = children.get(i); } } if (childnode == null) return null; // non-existing node if (path.length == 1) return childnode; if (childnode instanceof InnerNode) { return ((InnerNode)childnode).getLoc(path[1]); } else { return null; } } {code}
[jira] [Updated] (HADOOP-14759) S3GuardTool prune to prune specific bucket entries
[ https://issues.apache.org/jira/browse/HADOOP-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-14759: Status: Open (was: Patch Available) > S3GuardTool prune to prune specific bucket entries > -- > > Key: HADOOP-14759 > URL: https://issues.apache.org/jira/browse/HADOOP-14759 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Attachments: HADOOP-14759.001.patch, HADOOP-14759.002.patch, > HADOOP-14759.003.patch > > > Users may think that when you provide a URI to a bucket, you are pruning all > entries in the table *for that bucket*. In fact you are purging all entries > across all buckets in the table: > {code} > hadoop s3guard prune -days 7 s3a://ireland-1 > {code} > It should be restricted to that bucket, unless you specify otherwise > +maybe also add a hard date rather than a relative one -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414050#comment-16414050 ] Xiaoyu Yao commented on HADOOP-15339: - Thanks [~elek] for the update. Patch v3 looks good, +1 pending Jenkins. > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-15339: -- Attachment: HADOOP-15339.003.patch > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414026#comment-16414026 ] Elek, Marton commented on HADOOP-15339: --- Thanks [~xyao] for reviewing it. I added the Preconditions.checkNotNull as you suggested. The private constructor is suggested by an active checkstyle rule: {code} src/main/java/org/apache/hadoop/metrics2/util/MBeans.java:[45,1] (design) HideUtilityClassConstructor: Utility classes should not have a public or default constructor.{code} Without that I will get one more checkstyle warning. (And it's reasonable: a utility class full of static methods shouldn't be instantiated.) > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part of the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduces a new unit test for the MBeans utility, which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414026#comment-16414026 ] Elek, Marton edited comment on HADOOP-15339 at 3/26/18 3:43 PM: Thanks [~xyao] to review it. I added the Preconditions.checkNotNull as you suggested. The private constructor is suggested by an active checkstyle rule: {code} src/main/java/org/apache/hadoop/metrics2/util/MBeans.java:[45,1] (design) HideUtilityClassConstructor: Utility classes should not have a public or default constructor.{code} Without that I will get one more checkstlyle warning. (And it's reasonable: a utility class with full of static methods shouldn't been instantiated). was (Author: elek): Thanks [~xyao] to review it. I added the Preconditions.checkNotNull as you suggested. The private constructor is suggested by an active checkstyle rule: {code} src/main/java/org/apache/hadoop/metrics2/util/MBeans.java:[45,1] (design) HideUtilityClassConstructor: Utility classes should not have a public or default constructor.{code} {code} Without that I will get one more checkstlyle warning. (And it's reasonable: a utility class with full of static methods shouldn't been instantiated). > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. 
For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys
[ https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16413786#comment-16413786 ] genericqa commented on HADOOP-15312: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 57m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 7m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 46s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.conf.TestCommonConfigurationFields | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HADOOP-15312 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916177/HADOOP-15312.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 98a1f41e471e 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cfc3a1c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14387/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14387/testReport/ | | Max. process+thread count | 1394 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14387/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Undocumented KeyProvider configuration keys > --- > > Key: HADOOP-15312 > URL: https://issues.apache.org/jira/browse/HADOOP-15312 >
[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys
[ https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LiXin Ge updated HADOOP-15312:
------------------------------
    Status: Open  (was: Patch Available)

> Undocumented KeyProvider configuration keys
> ---
>
>          Key: HADOOP-15312
>          URL: https://issues.apache.org/jira/browse/HADOOP-15312
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: Wei-Chiu Chuang
>     Assignee: LiXin Ge
>     Priority: Major
>  Attachments: HADOOP-15312.001.patch, HADOOP-15312.002.patch
>
>
> Via HADOOP-14445, I found two undocumented configuration keys:
> hadoop.security.key.default.bitlength and hadoop.security.key.default.cipher

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys
[ https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LiXin Ge updated HADOOP-15312:
------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HADOOP-15312) Undocumented KeyProvider configuration keys
[ https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16413684#comment-16413684 ]

LiXin Ge commented on HADOOP-15312:
-----------------------------------

Add a new patch to describe the configuration keys better.
Hi [~jojochuang], any suggestion for this? thanks!
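For context, documenting these two KeyProvider keys would mean adding entries of roughly the following shape to core-default.xml. This is only a sketch: the default values shown (128-bit keys, AES/CTR/NoPadding) are assumptions drawn from the usual KeyProvider defaults, not taken from the attached patch or confirmed in this thread.

```
<!-- Hypothetical core-default.xml entries; the <value> defaults here are
     assumed, not quoted from HADOOP-15312's patch. -->
<property>
  <name>hadoop.security.key.default.bitlength</name>
  <value>128</value>
  <description>
    The key length (in bits) used by the KeyProvider when a key is
    created without an explicit bit length.
  </description>
</property>

<property>
  <name>hadoop.security.key.default.cipher</name>
  <value>AES/CTR/NoPadding</value>
  <description>
    The cipher used by the KeyProvider when a key is created without an
    explicit cipher.
  </description>
</property>
```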
[jira] [Updated] (HADOOP-15312) Undocumented KeyProvider configuration keys
[ https://issues.apache.org/jira/browse/HADOOP-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LiXin Ge updated HADOOP-15312:
------------------------------
    Attachment: HADOOP-15312.002.patch