[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls
[ https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315235#comment-15315235 ] Hudson commented on HADOOP-12957: - SUCCESS: Integrated in Hadoop-trunk-Commit #9913 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9913/]) Revert "HADOOP-12957. Limit the number of outstanding async calls. (wang: rev 4d36b221a24e3b626bb91093b0bb0fd377061cae) * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AsyncDistributedFileSystem.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/AsyncCallLimitExceededException.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFSRename.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java > Limit the number of outstanding async calls > --- > > Key: HADOOP-12957 > URL: https://issues.apache.org/jira/browse/HADOOP-12957 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: HDFS-9924 > > Attachments: HADOOP-12957-HADOOP-12909.000.patch, > HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, > HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, > HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, > HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch > > > In async RPC, if the callers don't read replies fast enough, the buffer > storing replies could be used up. This is to propose limiting the number of > outstanding async calls to eliminate the issue. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
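The limiting scheme HADOOP-12957 describes can be sketched with a counting semaphore: each outstanding async call holds a permit, released when the reply completes, and a call past the cap fails fast (analogous in spirit to AsyncCallLimitExceededException). This is an illustrative sketch, not Hadoop's ipc.Client code; the class and method names here are invented for the example.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: bound the number of outstanding async calls.
public class BoundedAsyncClient {
    private final Semaphore permits;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public BoundedAsyncClient(int maxOutstanding) {
        this.permits = new Semaphore(maxOutstanding);
    }

    public CompletableFuture<String> call(String request) {
        // Fail fast instead of letting the reply buffer grow without bound.
        if (!permits.tryAcquire()) {
            CompletableFuture<String> failed = new CompletableFuture<>();
            failed.completeExceptionally(
                new IllegalStateException("async call limit exceeded"));
            return failed;
        }
        return CompletableFuture
            .supplyAsync(() -> "reply:" + request, pool)
            .whenComplete((r, e) -> permits.release()); // free the slot
    }

    public void shutdown() { pool.shutdown(); }
}
```

A blocking variant would use `permits.acquire()` instead of `tryAcquire()`, trading fast failure for back-pressure on the caller.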
[jira] [Commented] (HADOOP-13226) Support async call retry and failover
[ https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315236#comment-15315236 ] Hudson commented on HADOOP-13226: - SUCCESS: Integrated in Hadoop-trunk-Commit #9913 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9913/]) Revert "HADOOP-13226 Support async call retry and failover." (wang: rev 5360da8bd9f720384860f411bee081aef13b4bd4) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/AsyncCallHandler.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java * hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/CallReturn.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AsyncDistributedFileSystem.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFS.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncHDFSWithHA.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/AsyncGet.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java > Support async call retry and failover > - > > Key: HADOOP-13226 > URL: https://issues.apache.org/jira/browse/HADOOP-13226 > Project: Hadoop Common > Issue Type: New Feature > Components: io, ipc 
>Reporter: Xiaobing Zhou >Assignee: Tsz Wo Nicholas Sze > Fix For: HDFS-9924 > > Attachments: h10433_20160524.patch, h10433_20160525.patch, > h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, > h10433_20160528c.patch > > > In current Async DFS implementation, file system calls are invoked and > returns Future immediately to clients. Clients call Future#get to retrieve > final results. Future#get internally invokes a chain of callbacks residing in > ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. The > callback path bypasses the original retry layer/logic designed for > synchronous DFS. This proposes refactoring to make retry also works for Async > DFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
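The refactoring HADOOP-13226 proposes — making retry apply to the async path rather than being bypassed by the callback chain — can be illustrated by composing a retry step over a future-returning call. This is a generic sketch, not Hadoop's RetryInvocationHandler/AsyncCallHandler; the names are invented for the example.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Illustrative sketch: layer retry over an async call by re-invoking the
// supplier whenever the returned future completes exceptionally.
public class AsyncRetry {
    public static <T> CompletableFuture<T> withRetry(
            Supplier<CompletableFuture<T>> call, int attemptsLeft) {
        return call.get().handle((result, error) -> {
            if (error == null) {
                return CompletableFuture.completedFuture(result);
            }
            if (attemptsLeft <= 1) {
                CompletableFuture<T> failed = new CompletableFuture<>();
                failed.completeExceptionally(error); // out of attempts
                return failed;
            }
            return withRetry(call, attemptsLeft - 1); // retry the async call
        }).thenCompose(f -> f); // flatten the nested future
    }
}
```

A production version would consult a retry policy (which exceptions to retry, backoff, failover to another proxy) before re-invoking, which is the role the RetryInvocationHandler layer plays for the synchronous path.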
[jira] [Commented] (HADOOP-13168) Support Future.get with timeout in ipc async calls
[ https://issues.apache.org/jira/browse/HADOOP-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315230#comment-15315230 ] Hudson commented on HADOOP-13168: - SUCCESS: Integrated in Hadoop-trunk-Commit #9913 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9913/]) Revert "HADOOP-13168. Support Future.get with timeout in ipc async (wang: rev e4450d47f19131818e1c040b6bd8d85ae8250475) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AsyncDistributedFileSystem.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/AsyncGet.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/AsyncGetFuture.java > Support Future.get with timeout in ipc async calls > -- > > Key: HADOOP-13168 > URL: https://issues.apache.org/jira/browse/HADOOP-13168 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Fix For: HDFS-9924 > > Attachments: c13168_20160517.patch, c13168_20160518.patch, > c13168_20160519.patch > > > Currently, the Future returned by ipc async call only support Future.get() > but not Future.get(timeout, unit). We should support the latter as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
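The behavior HADOOP-13168 adds — `Future.get(timeout, unit)` alongside the unbounded `Future.get()` — amounts to giving callers a bounded wait on an async reply. A minimal sketch of that caller-side pattern, using plain `CompletableFuture` rather than Hadoop's AsyncGet machinery:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative helper (names invented): poll an async reply with a bounded
// wait instead of blocking indefinitely on get().
public class TimedGetDemo {
    public static String getWithTimeout(CompletableFuture<String> reply,
                                        long timeout, TimeUnit unit) {
        try {
            return reply.get(timeout, unit); // bounded wait
        } catch (TimeoutException e) {
            return "TIMED_OUT"; // reply not ready within the deadline
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

On timeout the call remains outstanding; the caller can poll again later, which is why timed get is useful for draining many async calls without dedicating a blocked thread to each.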
[jira] [Created] (HADOOP-13238) pid handling is failing on secure datanode
Allen Wittenauer created HADOOP-13238:
-
Summary: pid handling is failing on secure datanode
Key: HADOOP-13238
URL: https://issues.apache.org/jira/browse/HADOOP-13238
Project: Hadoop Common
Issue Type: Bug
Components: scripts, security
Reporter: Allen Wittenauer

{code}
hdfs --daemon stop datanode
cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or directory
WARNING: pid has changed for datanode, skip deleting pid file
cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or directory
WARNING: daemon pid has changed for datanode, skip deleting daemon pid file
{code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-13226) Support async call retry and failover
[ https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13226: - Fix Version/s: (was: 2.8.0) HDFS-9924 > Support async call retry and failover > - > > Key: HADOOP-13226 > URL: https://issues.apache.org/jira/browse/HADOOP-13226 > Project: Hadoop Common > Issue Type: New Feature > Components: io, ipc >Reporter: Xiaobing Zhou >Assignee: Tsz Wo Nicholas Sze > Fix For: HDFS-9924 > > Attachments: h10433_20160524.patch, h10433_20160525.patch, > h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, > h10433_20160528c.patch > > > In current Async DFS implementation, file system calls are invoked and > returns Future immediately to clients. Clients call Future#get to retrieve > final results. Future#get internally invokes a chain of callbacks residing in > ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. The > callback path bypasses the original retry layer/logic designed for > synchronous DFS. This proposes refactoring to make retry also works for Async > DFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13168) Support Future.get with timeout in ipc async calls
[ https://issues.apache.org/jira/browse/HADOOP-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13168: - Fix Version/s: (was: 2.8.0) HDFS-9924 > Support Future.get with timeout in ipc async calls > -- > > Key: HADOOP-13168 > URL: https://issues.apache.org/jira/browse/HADOOP-13168 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Fix For: HDFS-9924 > > Attachments: c13168_20160517.patch, c13168_20160518.patch, > c13168_20160519.patch > > > Currently, the Future returned by ipc async call only support Future.get() > but not Future.get(timeout, unit). We should support the latter as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls
[ https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-12957: - Fix Version/s: (was: 2.8.0) HDFS-9924 > Limit the number of outstanding async calls > --- > > Key: HADOOP-12957 > URL: https://issues.apache.org/jira/browse/HADOOP-12957 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: HDFS-9924 > > Attachments: HADOOP-12957-HADOOP-12909.000.patch, > HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, > HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, > HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, > HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch > > > In async RPC, if the callers don't read replies fast enough, the buffer > storing replies could be used up. This is to propose limiting the number of > outstanding async calls to eliminate the issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13236) truncate will fail when we use viewFS.
[ https://issues.apache.org/jira/browse/HADOOP-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315185#comment-15315185 ] Brahma Reddy Battula commented on HADOOP-13236: --- [~arpiagariu] thanks for taking a look. It's {{ViewFileSystem.java}}

> truncate will fail when we use viewFS.
> --
>
> Key: HADOOP-13236
> URL: https://issues.apache.org/jira/browse/HADOOP-13236
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
>
> truncate will fail when we use viewFS.
> {code}
> @Override
> public boolean truncate(final Path f, final long newLength)
>     throws IOException {
>   InodeTree.ResolveResult<FileSystem> res =
>       fsState.resolve(getUriPath(f), true);
>   return res.targetFileSystem.truncate(f, newLength);
> }
> {code}
> *The last line should pass the resolved path:*
> {{return res.targetFileSystem.truncate(f, newLength);}} *should be*
> {{return res.targetFileSystem.truncate(res.remainingPath, newLength);}}

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
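The HADOOP-13236 bug above comes down to mount-table resolution: once a viewfs path is resolved to a target filesystem, the call must be delegated with the *remaining* (mount-relative) path, not the original mount-prefixed one. A toy mount table illustrating why (names invented; this is not ViewFileSystem's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical mini mount table demonstrating prefix resolution.
public class MountResolveDemo {
    static class ResolveResult {
        final String targetFs;      // stand-in for res.targetFileSystem
        final String remainingPath; // path relative to the mount target
        ResolveResult(String fs, String rest) {
            targetFs = fs;
            remainingPath = rest;
        }
    }

    private final Map<String, String> mounts = new LinkedHashMap<>();

    public void addMount(String prefix, String target) {
        mounts.put(prefix, target);
    }

    public ResolveResult resolve(String path) {
        for (Map.Entry<String, String> m : mounts.entrySet()) {
            if (path.startsWith(m.getKey())) {
                return new ResolveResult(m.getValue(),
                    path.substring(m.getKey().length()));
            }
        }
        throw new IllegalArgumentException("no mount for " + path);
    }

    // Correct delegation uses res.remainingPath; passing the original
    // viewfs path to the target filesystem (the bug) would fail, because
    // the mount prefix does not exist there.
    public String targetPathFor(String path) {
        ResolveResult res = resolve(path);
        return res.targetFs + res.remainingPath;
    }
}
```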
[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315154#comment-15315154 ] Hudson commented on HADOOP-13155: - SUCCESS: Integrated in Hadoop-trunk-Commit #9911 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9911/]) HADOOP-13155. Implement TokenRenewer to renew and cancel delegation (wang: rev 713cb71820ad94a5436f35824d07aa12fcba5cc6) * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java * hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuthenticationFilter.java * hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/KMSUtil.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderDelegationTokenExtension.java > Implement TokenRenewer to renew and cancel delegation tokens in KMS > --- > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 2.8.0 > > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, > HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find 
the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
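The dispatch described in HADOOP-13155 — look up a renewer by token kind, falling back to a trivial renewer that cannot extend the token — can be sketched with a plain registry. The real code discovers renewers via `java.util.ServiceLoader` from `META-INF/services`; this sketch substitutes a map for the loader, and all names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative registry: kinds with no registered renewer fall back to a
// trivial renewer that manages nothing (the failure mode the issue fixes).
public class RenewerRegistry {
    interface Renewer {
        boolean isManaged();        // can this renewer handle the token?
        long renew(long expiry);    // returns the new expiry time
    }

    static final Renewer TRIVIAL = new Renewer() {
        public boolean isManaged() { return false; }
        public long renew(long expiry) {
            throw new UnsupportedOperationException("trivial renewer");
        }
    };

    private final Map<String, Renewer> byKind = new HashMap<>();

    public void register(String kind, Renewer r) { byKind.put(kind, r); }

    public Renewer lookup(String kind) {
        return byKind.getOrDefault(kind, TRIVIAL); // ServiceLoader stand-in
    }
}
```

The patch's effect, in these terms, is registering a real renewer for the KMS token kind so `lookup` stops returning the trivial fallback.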
[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13155: - Component/s: security kms > Implement TokenRenewer to renew and cancel delegation tokens in KMS > --- > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 2.8.0 > > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, > HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13155: - Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Pushed back through branch-2.8. Some small fixups for the backports. Thanks again Xiao for the patch and Allen for also commenting! > Implement TokenRenewer to renew and cancel delegation tokens in KMS > --- > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 2.8.0 > > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, > HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315129#comment-15315129 ] Mingliang Liu commented on HADOOP-13105: Thank you for your review and commit, [~cnauroth], and thank you [~jojochuang] for the review and discussion. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315126#comment-15315126 ] Hudson commented on HADOOP-13105: - SUCCESS: Integrated in Hadoop-trunk-Commit #9910 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9910/]) HADOOP-13105. Support timeouts in LDAP queries in LdapGroupsMapping. (cnauroth: rev d82bc8501869be78780fc09752dbf7af918c14af) * hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
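For context on HADOOP-13105: JNDI's LDAP provider exposes connect and read timeouts through the environment properties `com.sun.jndi.ldap.connect.timeout` and `com.sun.jndi.ldap.read.timeout`; without them, a query against a dead server can hang indefinitely. A sketch of building such an environment (whether LdapGroupsMapping wires them exactly this way is an assumption; the URL and values are illustrative):

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch: an LDAP JNDI environment with bounded connect/read waits.
public class LdapEnvDemo {
    public static Hashtable<String, String> buildEnv(
            String url, int connectTimeoutMs, int readTimeoutMs) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        // Bound the TCP connect and the per-request read, in milliseconds.
        env.put("com.sun.jndi.ldap.connect.timeout",
                String.valueOf(connectTimeoutMs));
        env.put("com.sun.jndi.ldap.read.timeout",
                String.valueOf(readTimeoutMs));
        return env;
    }
}
```

The environment would then be passed to `new InitialDirContext(env)`; a zero or missing value means wait forever, which is exactly the risk the issue describes.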
[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315124#comment-15315124 ] Hadoop QA commented on HADOOP-12718: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 4s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch | | JIRA Issue | HADOOP-12718 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9662/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Blocker > Labels: supportability > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . 
> $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
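The confusion HADOOP-12718 describes is that an unreadable local source directory is reported as missing. A small check distinguishing the two cases (this is illustrative, not FsShell's actual code path; it relies on `File.listFiles()` returning null for a directory the caller cannot read):

```java
import java.io.File;

// Illustrative source check: name the real cause instead of collapsing
// "unreadable" into "does not exist".
public class SourceCheckDemo {
    public static String describe(File src) {
        if (!src.exists()) {
            return src.getName() + ": No such file or directory";
        }
        // listFiles() returns null when a directory exists but cannot be
        // read, which is the case the misleading message hides.
        if (src.isDirectory() && src.listFiles() == null) {
            return src.getName() + " (Permission denied)";
        }
        return "ok";
    }
}
```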
[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315117#comment-15315117 ] Andrew Wang commented on HADOOP-13155: -- LGTM +1, will commit shortly. Thanks for working on this Xiao. Could you also add a release note about the new config requirement for renewal? Also, should we do anything about this for Hadoop 3? For instance, deprecate and remove the "dfs._" key in favor of the "hadoop._" key? > Implement TokenRenewer to renew and cancel delegation tokens in KMS > --- > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, > HADOOP-13155.06.patch, HADOOP-13155.07.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13105: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed this to trunk, branch-2 and branch-2.8. [~liuml07], thank you for the patch. [~jojochuang], thank you for helping with code review. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315107#comment-15315107 ] Hadoop QA commented on HADOOP-12718: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch | | JIRA Issue | HADOOP-12718 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9661/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Blocker > Labels: supportability > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . 
> $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315099#comment-15315099 ] Hadoop QA commented on HADOOP-13105:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 6m 52s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 7m 24s | trunk passed |
| +1 | compile | 7m 22s | trunk passed |
| +1 | checkstyle | 0m 27s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 28s | trunk passed |
| +1 | javadoc | 0m 57s | trunk passed |
| +1 | mvninstall | 0m 42s | the patch passed |
| +1 | compile | 7m 9s | the patch passed |
| +1 | javac | 7m 9s | the patch passed |
| +1 | checkstyle | 0m 26s | the patch passed |
| +1 | mvnsite | 1m 1s | the patch passed |
| +1 | mvneclipse | 0m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 38s | the patch passed |
| +1 | javadoc | 0m 59s | the patch passed |
| +1 | unit | 7m 51s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 46m 49s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808081/HADOOP-13105.004.patch |
| JIRA Issue | HADOOP-13105 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 8dc662e623c0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 78b3a03 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9660/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9660/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13105.000.patch,
[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-12718: --- Target Version/s: 2.8.0 Priority: Blocker (was: Minor) Hadoop Flags: (was: Reviewed) Fix Version/s: (was: 2.8.0) > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Blocker > Labels: supportability > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . > $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth reopened HADOOP-12718: I'm going to reopen the issue now and attach a patch to do the revert. This is just the outcome of running {{git revert --no-commit 97056c3355810a803f07baca89b89e2bf6bb7201}}. It has been a while since this patch was committed though, so I'd like to take the revert through pre-commit as a precaution. I'm also going to temporarily raise this to blocker for 2.8.0, just so we don't forget to complete the revert before that release. > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Labels: supportability > Fix For: 2.8.0 > > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . > $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-12718: --- Attachment: HADOOP-12718.004.patch > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Labels: supportability > Fix For: 2.8.0 > > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . > $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-12718: --- Status: Patch Available (was: Reopened) > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Labels: supportability > Fix For: 2.8.0 > > Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, > HADOOP-12718.003.patch, HADOOP-12718.004.patch, > TestFsShellCopyPermission-output.001.txt, > TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch > > > When the user doesn't have access permission to the local directory, the > "hadoop fs -put" command prints a confusing error message "No such file or > directory". > {noformat} > $ whoami > systest > $ cd /home/systest > $ ls -ld . > drwx--. 4 systest systest 4096 Jan 13 14:21 . > $ mkdir d1 > $ sudo -u hdfs hadoop fs -put d1 /tmp > put: `d1': No such file or directory > {noformat} > It will be more informative if the message is: > {noformat} > put: d1 (Permission denied) > {noformat} > If the source is a local file, the error message is ok: > {noformat} > put: f1 (Permission denied) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13109) Add ability to edit existing token file via dtutil -alias flag
[ https://issues.apache.org/jira/browse/HADOOP-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315049#comment-15315049 ] Hudson commented on HADOOP-13109: - SUCCESS: Integrated in Hadoop-trunk-Commit #9909 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9909/]) HADOOP-13109. Add ability to edit existing token file via dtutil -alias (aw: rev 78b3a038319cb351632250279f171b756c7f24b0) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestDtUtilShell.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java > Add ability to edit existing token file via dtutil -alias flag > --- > > Key: HADOOP-13109 > URL: https://issues.apache.org/jira/browse/HADOOP-13109 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13109.01.patch, HADOOP-13109.02.patch > > > The first iteration of the dtutil command > (org.apache.hadoop.security.token.DtUtilShell) did not provide an operation > to edit the service field on a token that has already been fetched to a file. > This is a necessary feature for users who have copied a token file out of a > cluster that does not support the dtutil command (hence, the token could not > be aliased during get). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13105: --- Affects Version/s: (was: 3.0.0-alpha1) Target Version/s: 2.8.0 Hadoop Flags: Reviewed +1 for patch 004, pending pre-commit run. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13105: --- Attachment: HADOOP-13105.004.patch Thank you [~cnauroth] for your comment. Sorry, I was not aware of the {{final}} variable capture problem when sharing a local variable with a nested thread under Java 7; I would have caught it if I had had the chance to build against {{branch-2}}. I was spoiled by Java 8 and especially the IntelliJ IDE. The v4 patch simply adds the {{final}} keyword to the two {{finLatch}} variables in the tests. I tested the patch on trunk with both Java 8 and Java 7, and it looked good. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.0.0-alpha1 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch, HADOOP-13105.004.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection.
[jira] [Updated] (HADOOP-13109) Add ability to edit existing token file via dtutil -alias flag
[ https://issues.apache.org/jira/browse/HADOOP-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13109: -- Component/s: security > Add ability to edit existing token file via dtutil -alias flag > --- > > Key: HADOOP-13109 > URL: https://issues.apache.org/jira/browse/HADOOP-13109 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13109.01.patch, HADOOP-13109.02.patch > > > The first iteration of the dtutil command > (org.apache.hadoop.security.token.DtUtilShell) did not provide an operation > to edit the service field on a token that has already been fetched to a file. > This is a necessary feature for users who have copied a token file out of a > cluster that does not support the dtutil command (hence, the token could not > be aliased during get). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13109) Add ability to edit existing token file via dtutil -alias flag
[ https://issues.apache.org/jira/browse/HADOOP-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13109: -- Resolution: Fixed Fix Version/s: 3.0.0-alpha1 Status: Resolved (was: Patch Available) > Add ability to edit existing token file via dtutil -alias flag > --- > > Key: HADOOP-13109 > URL: https://issues.apache.org/jira/browse/HADOOP-13109 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13109.01.patch, HADOOP-13109.02.patch > > > The first iteration of the dtutil command > (org.apache.hadoop.security.token.DtUtilShell) did not provide an operation > to edit the service field on a token that has already been fetched to a file. > This is a necessary feature for users who have copied a token file out of a > cluster that does not support the dtutil command (hence, the token could not > be aliased during get). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13109) Add ability to edit existing token file via dtutil -alias flag
[ https://issues.apache.org/jira/browse/HADOOP-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15315003#comment-15315003 ] Allen Wittenauer commented on HADOOP-13109: --- Hindsight is 20/20, but I'm sort of regretting how the term 'alias' was used in dtutil. Oh well. +1. Thanks! I'll commit this to trunk here in a bit. > Add ability to edit existing token file via dtutil -alias flag > --- > > Key: HADOOP-13109 > URL: https://issues.apache.org/jira/browse/HADOOP-13109 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Attachments: HADOOP-13109.01.patch, HADOOP-13109.02.patch > > > The first iteration of the dtutil command > (org.apache.hadoop.security.token.DtUtilShell) did not provide an operation > to edit the service field on a token that has already been fetched to a file. > This is a necessary feature for users who have copied a token file out of a > cluster that does not support the dtutil command (hence, the token could not > be aliased during get).
[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314974#comment-15314974 ] Chris Nauroth commented on HADOOP-13105: [~liuml07], patch 003 looks good. I just have one more request. In the tests, please declare {{finLatch}} as {{final}}. Without that, the patch will cause compilation to fail on branch-2 with the errors shown below. This didn't show up in pre-commit, because pre-commit ran against trunk, which builds with Java 8. Java 8 introduced the concept of "effectively final" variables: the compiler detects that these variables are assigned only once and treats them as final automatically. {code} [ERROR] /Users/chris/git/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java:[232,15] local variable finLatch is accessed from within inner class; needs to be declared final [ERROR] /Users/chris/git/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java:[288,15] local variable finLatch is accessed from within inner class; needs to be declared final {code} > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.0.0-alpha1 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch, > HADOOP-13105.002.patch, HADOOP-13105.003.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. > This can create a risk of a very long/infinite wait on a connection.
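The capture rule Chris describes can be reproduced outside Hadoop. Below is a minimal, self-contained sketch with hypothetical names (not the actual {{TestLdapGroupsMapping}} code): omit the {{final}} on {{finLatch}} and javac 7 reports the exact error quoted above, while javac 8 accepts the variable as effectively final.

```java
import java.util.concurrent.CountDownLatch;

// Minimal reproduction of the branch-2 (Java 7) compile issue: a local
// variable captured by an anonymous inner class must be declared "final"
// under Java 7; Java 8 only requires it to be *effectively* final.
public class FinalCaptureDemo {

    // Declaring the latch final compiles on both Java 7 and Java 8.
    public static boolean runWithFinalLatch() throws InterruptedException {
        final CountDownLatch finLatch = new CountDownLatch(1);
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                // Accessing finLatch here is what triggers "local variable
                // finLatch is accessed from within inner class; needs to be
                // declared final" when the final modifier is dropped on Java 7.
                finLatch.countDown();
            }
        });
        worker.start();
        finLatch.await();
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithFinalLatch());
    }
}
```

Adding {{final}} is a no-op on Java 8, which is why pre-commit against trunk never flagged it.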
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314915#comment-15314915 ] Chris Nauroth commented on HADOOP-13203: # The comment "In case this is set to contentLength, expect lots of connection closes with abort..." is not entirely accurate. I see how this is true for usage that seeks backward, but it's not true for usage that seeks forward a lot, as demonstrated during the HADOOP-13028 review. (More on this topic below.) # Would you please revert the change in {{S3AInputStream#setReadahead}}? This is a public API, and the contract of that API is defined in interface {{CanSetReadahead}}. It states that callers are allowed to pass {{null}} to reset the read-ahead to its default value. This matches the behavior implemented by HDFS. The logic currently in S3A implements it correctly, but with this patch applied, it would cause a {{NullPointerException}} if a caller passed {{null}}. # In {{TestS3AInputStreamPerformance}}, I see why these changes were required to make the tests pass, but it highlights that this change partly reverts what was achieved in HADOOP-13028 to minimize reopens on forward seeks. Before this patch, {{testReadAheadDefault}} generated 1 open. After applying the patch, I see it generating 343 opens. It seems we can't fully optimize forward seek without harming backwards seek due to the unintended aborts. I suppose one option would be to introduce an optional advice API, similar to calling {{fadvise(FADV_SEQUENTIAL)}} that forward-seeking applications could call. That would be a much bigger change though. I don't see a way to achieve anything better right now, although it's probably good that you changed {{closeStream}} to consider read-ahead instead of the old {{CLOSE_THRESHOLD}} to determine whether or not to abort. Steve, do you have any further thoughts on this? 
> S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests passes in my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
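The {{CanSetReadahead}} contract raised in point 2 of the review above can be sketched in isolation. This is an illustrative model with invented names, not the actual {{S3AInputStream}} code; it only shows the null-means-reset behavior that the reviewed patch would break:

```java
// Hypothetical simplified model of the CanSetReadahead contract: passing
// null resets read-ahead to the default (matching HDFS) rather than
// throwing NullPointerException. Illustrative sketch only.
public class ReadaheadPolicy {
    public static final long DEFAULT_READAHEAD = 64 * 1024;

    private long readahead = DEFAULT_READAHEAD;

    public void setReadahead(Long readahead) {
        if (readahead == null) {
            // Contract: null means "reset to the default value".
            this.readahead = DEFAULT_READAHEAD;
        } else if (readahead < 0) {
            throw new IllegalArgumentException("Negative readahead value: " + readahead);
        } else {
            this.readahead = readahead;
        }
    }

    public long getReadahead() {
        return readahead;
    }

    public static void main(String[] args) {
        ReadaheadPolicy p = new ReadaheadPolicy();
        p.setReadahead(128 * 1024L);
        p.setReadahead(null); // must not throw; resets to the default
        System.out.println(p.getReadahead());
    }
}
```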
[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314883#comment-15314883 ] Jing Zhao commented on HADOOP-13227: Thanks a lot for the great work, [~szetszwo]! The patch looks good to me. Some minor comments: # In RetryInfo#newRetryInfo, it looks like failover, fail, and retry are mutually exclusive. Can we simplify RetryInfo and only keep one RetryAction there? # In AsyncCallHandler, the queue will be accessed by many threads, so maybe we should consider directly using {{ConcurrentLinkedQueue}}, which uses an efficient non-blocking algorithm. # In {{checkCalls}}, do you think we can avoid the poll+offer operations for a not-yet-done call? > AsyncCallHandler should use a event driven architecture to handle async calls > - > > Key: HADOOP-13227 > URL: https://issues.apache.org/jira/browse/HADOOP-13227 > Project: Hadoop Common > Issue Type: Improvement > Components: io, ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: c13227_20160602.patch > > > This JIRA is to address [Jing's > comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630] > in HADOOP-13226.
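Jing's second and third points can be sketched together. The following is a hypothetical illustration with invented class names (not the actual {{AsyncCallHandler}} code): pending calls live in a non-blocking {{ConcurrentLinkedQueue}}, and {{checkCalls}} scans with an iterator so a not-yet-done call keeps its queue position instead of being polled and re-offered.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of a lock-free pending-call queue. AsyncCall is a
// stand-in for an async RPC call whose "done" flag flips when the reply
// arrives; any thread may submit while another thread runs checkCalls().
public class PendingCalls {
    public static class AsyncCall {
        final AtomicBoolean done = new AtomicBoolean(false);
        void complete() { done.set(true); }
        boolean isDone() { return done.get(); }
    }

    private final ConcurrentLinkedQueue<AsyncCall> pending =
        new ConcurrentLinkedQueue<>();

    public void submit(AsyncCall call) {
        pending.offer(call);
    }

    // checkCalls sketch: remove completed calls in place via the (weakly
    // consistent) iterator; unfinished calls are simply skipped, so there
    // is no poll+offer churn. Returns the number of calls retired.
    public int checkCalls() {
        int removed = 0;
        for (Iterator<AsyncCall> it = pending.iterator(); it.hasNext();) {
            if (it.next().isDone()) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

{{ConcurrentLinkedQueue}}'s iterator is weakly consistent and supports {{remove()}}, which is what makes the skip-in-place scan safe without external locking.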
[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts
[ https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314871#comment-15314871 ] Vinod Kumar Vavilapalli commented on HADOOP-12687: -- bq. But in current case, with patch, direct look-up is being done after all check is done including trailing dot and search domains. Is it still a RFC violation to lookup for direct host? bq. Anyone can confirm this? That'd be a question for [~vvasudev] / [~sunilg] / [~rohithsharma]. Folks, can we please get a consensus here? This issue is plaguing way too many JIRAs. > SecureUtil#getByName should also try to resolve direct hostname, incase > multiple loopback addresses are present in /etc/hosts > - > > Key: HADOOP-12687 > URL: https://issues.apache.org/jira/browse/HADOOP-12687 > Project: Hadoop Common > Issue Type: Bug >Reporter: Junping Du >Assignee: Sunil G >Priority: Blocker > Labels: security > Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, > 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch > > > From > https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt, > we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get > timeout which can be reproduced locally. > When {{/etc/hosts}} has multiple loopback entries, > {{InetAddress.getByName(null)}} will be returning the first entry present in > etc/hosts. Hence its possible that machine hostname can be second in list and > cause {{UnKnownHostException}}. > Suggesting a direct resolve for such hostname scenarios. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
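For background on the lookup behavior under discussion, a small self-contained sketch (the helper class and method names are hypothetical, not {{SecurityUtil}} code): {{InetAddress.getByName(null)}} is specified to return an address of the loopback interface, which is why, with multiple loopback entries in /etc/hosts, the returned entry need not correspond to the machine's hostname and a subsequent direct lookup is proposed as a fallback.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Demonstrates the two lookups involved: the null lookup that yields a
// loopback address, and a direct hostname lookup that can independently fail.
public class HostLookupDemo {

    // getByName(null) is specified to return a loopback-interface address.
    public static boolean loopbackForNull() {
        try {
            return InetAddress.getByName(null).isLoopbackAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }

    // Roughly what a "direct resolve" fallback needs to test: does this
    // hostname resolve at all?
    public static boolean resolves(String hostname) {
        try {
            InetAddress.getByName(hostname);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(loopbackForNull());
        System.out.println(resolves("localhost"));
    }
}
```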
[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314812#comment-15314812 ] Chris Nauroth commented on HADOOP-13237: This looks to me like {{AnonymousAWSCredentials}} is fundamentally unusable in an {{AWSCredentialsProviderChain}}. The {{AnonymousAWSCredentials}} is hard-coded to return a null key and secret. https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/auth/AnonymousAWSCredentials.java#L26-L38 However, the chain is coded to throw an exception if it walks the whole chain and can't find a non-null key and secret. https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/auth/AWSCredentialsProviderChain.java#L108-L132 I'd be curious whether it works when you swap out the {{credentials = new AWSCredentialsProviderChain(...)}} line for a straight call to {{credentials = new AnonymousAWSCredentialsProvider()}}. If it does, then I think this could be interpreted as a bug in the AWS SDK, and we might consider filing a patch to that project. In the absence of AWS SDK changes, we could have a configuration property like {{fs.s3a.anonymous.access}}, which if true would skip the chain and just create the anonymous provider. Actually, it might be good for anonymous access to be opt-in via configuration anyway, because I expect most deployments wouldn't want anonymous access and would prefer to fail fast so they know to lock down their bucket. > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call.
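The chain behavior Chris describes can be modeled without the AWS SDK. The sketch below uses invented mini-interfaces purely to illustrate the failure mode — it is not the real SDK code: a chain that insists on a non-null access key can never surface anonymous credentials, while using the anonymous provider directly works.

```java
import java.util.Arrays;
import java.util.List;

// Self-contained model of the reported incompatibility between anonymous
// credentials (null key material) and a provider chain that requires a
// non-null key. Names mirror the SDK classes only loosely.
public class CredentialChainDemo {
    interface Credentials {
        String accessKey(); // null for anonymous, like AnonymousAWSCredentials
    }

    interface Provider {
        Credentials getCredentials();
    }

    static final Credentials ANONYMOUS = new Credentials() {
        @Override public String accessKey() { return null; }
    };

    static class AnonymousProvider implements Provider {
        @Override public Credentials getCredentials() { return ANONYMOUS; }
    }

    // Mirrors the chain's behavior: walk every provider and throw if none
    // supplies a non-null access key.
    static class Chain implements Provider {
        private final List<Provider> providers;
        Chain(Provider... providers) { this.providers = Arrays.asList(providers); }
        @Override public Credentials getCredentials() {
            for (Provider p : providers) {
                Credentials c = p.getCredentials();
                if (c != null && c.accessKey() != null) {
                    return c;
                }
            }
            throw new IllegalStateException(
                "Unable to load credentials from any provider in the chain");
        }
    }

    public static boolean chainRejectsAnonymous() {
        try {
            new Chain(new AnonymousProvider()).getCredentials();
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static boolean directProviderWorks() {
        return new AnonymousProvider().getCredentials().accessKey() == null;
    }
}
```

Under this model the chain always throws when only anonymous credentials are available, which matches the stack trace posted later in the thread.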
[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314779#comment-15314779 ] Steve Loughran commented on HADOOP-13237: - Should we maybe be more relaxed about failures when verifying that a bucket exists at startup? I'll try experimenting with downgrading the failure to a warning and seeing what happens to a test run. Irony: we never see this problem in hadoop-aws test runs, because they only run if you have credentials. > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > If an S3 bucket is public, anyone should be able to read from it. > However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call.
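The downgrade-to-warning idea could look roughly like the following. This is a hypothetical sketch with invented names, not the actual {{S3AFileSystem}} code; it only illustrates catching the existence-probe failure, logging a warning, and letting initialization continue so the error surfaces on first real I/O instead.

```java
import java.util.logging.Logger;

// Sketch: treat a failed bucket-existence probe during initialization as a
// warning rather than a fatal error.
public class LenientInit {
    private static final Logger LOG =
        Logger.getLogger(LenientInit.class.getName());

    interface BucketProbe {
        // Returns normally if the bucket is known to exist; throws otherwise
        // (e.g. because no credentials could be loaded).
        void verifyBucketExists() throws Exception;
    }

    // Returns true when initialization proceeds; the probe failure is
    // logged instead of propagated.
    public static boolean initialize(BucketProbe probe) {
        try {
            probe.verifyBucketExists();
        } catch (Exception e) {
            LOG.warning("Could not verify that the bucket exists: " + e
                + "; continuing - the first read will fail if access is truly denied");
        }
        return true;
    }
}
```

The trade-off is the one Chris raises in the earlier comment: deferring the failure also defers fail-fast feedback for genuinely misconfigured credentials.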
[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314772#comment-15314772 ] Steve Loughran commented on HADOOP-13237: - stack {code} 16/06/03 21:40:37 INFO BlockManagerMasterEndpoint: Registering block manager localhost:60011 with 511.1 MB RAM, BlockManagerId(driver, localhost, 60011) 16/06/03 21:40:37 INFO BlockManagerMaster: Registered BlockManager 16/06/03 21:40:39 ERROR S3ALineCount: Failed to execute line count org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on landsat-pds: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain: Unable to load AWS credentials from any provider in the chain at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:82) at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:300) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:267) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2830) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2812) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389) at org.apache.spark.cloud.s3.examples.S3ALineCount$.innerMain(S3ALineCount.scala:75) at org.apache.spark.cloud.s3.examples.S3ALineCount$.main(S3ALineCount.scala:50) at org.apache.spark.cloud.s3.examples.S3ALineCount.main(S3ALineCount.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3779) at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1107) at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1070) at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:288) ... 18 more {code} > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > If an S3 bucket is public, anyone should be able to read from it. > However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
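The failure above comes from the credentials chain throwing when no provider yields credentials, even though a public bucket needs none. A minimal, self-contained sketch of the fallback idea in plain Java (this is not the AWS SDK; all names here are illustrative, and the point is only that appending an anonymous provider of last resort lets initialization succeed against a public bucket):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a credentials chain: each provider either yields a
// credentials token or null. With no anonymous fallback, an empty chain
// throws, which is the failure mode in the stack trace above.
public class CredentialChainSketch {
    interface CredentialsProvider {
        /** Returns a credentials token, or null if this provider has none. */
        String getCredentials();
    }

    static String resolve(List<CredentialsProvider> chain) {
        for (CredentialsProvider p : chain) {
            String c = p.getCredentials();
            if (c != null) {
                return c;
            }
        }
        // Mirrors the AmazonClientException message seen above.
        throw new RuntimeException(
            "Unable to load AWS credentials from any provider in the chain");
    }

    /** A provider of last resort: unsigned (anonymous) requests. */
    static final CredentialsProvider ANONYMOUS = () -> "anonymous";

    public static void main(String[] args) {
        List<CredentialsProvider> chain = new ArrayList<>();
        chain.add(() -> null);   // env vars absent
        chain.add(() -> null);   // config keys absent
        boolean threw = false;
        try {
            resolve(chain);
        } catch (RuntimeException e) {
            threw = true;        // current s3a behaviour: init fails
        }
        chain.add(ANONYMOUS);    // fallback for public buckets
        System.out.println(threw + " " + resolve(chain));
    }
}
```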
[jira] [Created] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
Steve Loughran created HADOOP-13237: --- Summary: s3a initialization against public bucket fails if caller lacks any credentials Key: HADOOP-13237 URL: https://issues.apache.org/jira/browse/HADOOP-13237 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steve Loughran Assignee: Steve Loughran If an S3 bucket is public, anyone should be able to read from it. However, you cannot create an s3a client bonded to a public bucket unless you have some credentials; the {{doesBucketExist()}} check rejects the call.
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314715#comment-15314715 ] Anu Engineer commented on HADOOP-12291: --- {code} if (goUpHierarchy > 0 && !isPosix) { {code} Why did we add {{!isPosix}}? Is this something that you discovered in testing? I don't see that in the last patch. Not that it is an issue, more of a question for my own understanding. > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
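For context on the hierarchy walk being discussed, here is a self-contained sketch of bounded nested-group resolution in the spirit of the goUpHierarchy limit (the map-based group graph and all names are illustrative; the real LdapGroupsMapping issues LDAP queries per level rather than consulting a map):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: starting from a user's direct groups, walk up to
// goUpHierarchy levels of parent groups, collecting every group seen.
public class NestedGroupSketch {
    static Set<String> resolveGroups(String user,
                                     Map<String, List<String>> memberOf,
                                     int goUpHierarchy) {
        Set<String> result = new LinkedHashSet<>(memberOf.getOrDefault(user, List.of()));
        Set<String> frontier = new LinkedHashSet<>(result);
        for (int level = 0; level < goUpHierarchy; level++) {
            Set<String> next = new LinkedHashSet<>();
            for (String g : frontier) {
                for (String parent : memberOf.getOrDefault(g, List.of())) {
                    if (result.add(parent)) {   // new group: explore it next level
                        next.add(parent);
                    }
                }
            }
            if (next.isEmpty()) break;          // nothing left to climb
            frontier = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // jdoe is in group A; A is a member of B (the example from the JIRA).
        Map<String, List<String>> memberOf = Map.of(
            "jdoe", List.of("A"),
            "A", List.of("B"));
        System.out.println(resolveGroups("jdoe", memberOf, 0)); // direct only
        System.out.println(resolveGroups("jdoe", memberOf, 1)); // one level up
    }
}
```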
[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls
[ https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314599#comment-15314599 ] stack commented on HADOOP-12910: Back now [~cnauroth] bq. ...whereas the scope of this issue has focused on asynchronous NameNode metadata operations The discussion in here is broader than just NN metadata operations. The summary and description would seem to cover how we will add async to FileSystem generally. It seems like a good thing to nail down, given async is coming up in a couple of areas ([~Apache9]'s file ops and these NN calls). They should all align on their approach, I'd say. Regarding a writeup, [~Apache9] has revived HDFS-916 and added a doc on what is wanted when doing async file ops. High-level, we want to be able to consume an HDFS API async, in an event-driven way. A radical experiment that totally replaces dfsclient with a simplified, bare-bones implementation that does the minimal subset necessary for writing HBase WALs (HBASE-14790) allows us to write much faster while using less resources. The implementation also does fan-out rather than pipeline. This, put together with it being barebones -- e.g. we do not want to trigger pipeline recovery, it takes too long, if it works -- muddies the comparison, but the general drift is plain. > Add new FileSystem API to support asynchronous method calls > --- > > Key: HADOOP-12910 > URL: https://issues.apache.org/jira/browse/HADOOP-12910 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: HADOOP-12910-HDFS-9924.000.patch, > HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch > > > Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a > better name). All the APIs in FutureFileSystem are the same as FileSystem > except that the return type is wrapped by Future, e.g.
> {code} > //FileSystem > public boolean rename(Path src, Path dst) throws IOException; > //FutureFileSystem > public Future<Boolean> rename(Path src, Path dst) throws IOException; > {code} > Note that FutureFileSystem does not extend FileSystem.
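The proposed shape can be sketched with plain JDK types. This uses java.nio paths and an ExecutorService purely for illustration; the actual proposal wraps org.apache.hadoop.fs.FileSystem, and a production async client would be event-driven rather than thread-pool-backed:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: same signature as the blocking call, but the return type is
// wrapped in Future so the caller can continue and collect the result later.
public class FutureFsSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Blocking flavour, as in FileSystem#rename.
    public boolean rename(Path src, Path dst) throws IOException {
        Files.move(src, dst);
        return true;
    }

    // Async flavour, as proposed for FutureFileSystem#rename.
    public Future<Boolean> renameAsync(Path src, Path dst) {
        return pool.submit(() -> rename(src, dst));
    }

    public void close() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        FutureFsSketch fs = new FutureFsSketch();
        Path src = Files.createTempFile("future-fs", ".src");
        Path dst = src.resolveSibling(src.getFileName() + ".dst");
        Future<Boolean> pending = fs.renameAsync(src, dst); // returns immediately
        System.out.println(pending.get());                  // blocks until done
        Files.deleteIfExists(dst);
        fs.close();
    }
}
```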
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314579#comment-15314579 ] Anu Engineer commented on HADOOP-12291: --- +1 (non-binding). Thanks for updating the patch. Changes look good to me. > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl
[ https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314521#comment-15314521 ] Dave Marion commented on HADOOP-13225: -- [~jzhuge] Now that the scope has expanded I don't know that I will have time and the ability to test this. If you have the cycles and the ability to do this, please take it. > Allow java to be started with numactl > - > > Key: HADOOP-13225 > URL: https://issues.apache.org/jira/browse/HADOOP-13225 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Dave Marion >Assignee: Dave Marion > Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, > HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch > > > Allow numactl constraints to be applied to the datanode process. The > implementation I have in mind involves two environment variables (enable and > parameters) in the datanode startup process. Basically, if enabled and > numactl exists on the system, then start the java process using it. Provide a > default set of parameters, and allow the user to override the default. Wiring > this up for the non-jsvc use case seems straightforward. Not sure how this > can be supported using jsvc.
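The enable/parameters wiring described in the issue might look roughly like this in the startup script (variable names and the default policy are illustrative, not a final Hadoop interface): opt in via one env var, take parameters from another, and fall back to plain java when numactl is absent.

```shell
#!/usr/bin/env bash
# Sketch: wrap the java launch with numactl only when explicitly enabled
# AND numactl is actually installed on this host.
HADOOP_NUMACTL_ENABLED="${HADOOP_NUMACTL_ENABLED:-false}"
HADOOP_NUMACTL_ARGS="${HADOOP_NUMACTL_ARGS:---interleave=all}"

numactl_prefix=""
if [[ "${HADOOP_NUMACTL_ENABLED}" == "true" ]] && command -v numactl >/dev/null 2>&1; then
  numactl_prefix="numactl ${HADOOP_NUMACTL_ARGS}"
fi

# The datanode launch line would then become something like:
echo ${numactl_prefix} java -version
```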
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314475#comment-15314475 ] Esther Kundin commented on HADOOP-12291: The test failures look unrelated to my update. > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314454#comment-15314454 ] Hadoop QA commented on HADOOP-12291: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 37 unchanged - 3 fixed = 37 total (was 40) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 34s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 25s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | | | hadoop.security.ssl.TestReloadingX509TrustManager | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808012/HADOOP-12291.007.patch | | JIRA Issue | HADOOP-12291 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux be535bca8b13 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c58a59f | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9659/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9659/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9659/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9659/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314402#comment-15314402 ] Chris Nauroth commented on HADOOP-13203: Rajesh, thank you for the further explanation. Sorry for my earlier confusion. I was misinterpreting the word "abort" to mean something happening at the TCP layer, e.g. an RST packet sent from the S3 back-end. Now I understand that we're really talking about our own abort logic in {{S3AInputStream#closeStream}}. Now that I understand the goal of this change, I can code review it. I'll try to do that later today (PST). > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests pass in my machine.
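To make the connection-reuse argument concrete, here is a small sketch of the arithmetic (method and parameter names are illustrative assumptions, not the actual S3AInputStream code): if the GET range is bounded by what the caller will plausibly read instead of the full contentLength, then on close() only a few bytes remain to drain, and the HTTP connection can be returned to the pool instead of aborted.

```java
// Illustrative only: shows why requesting to contentLength forces abort()
// on a large object, while a bounded request leaves little to drain.
public class RequestLengthSketch {
    /** Bytes remaining in the current request after reading up to pos. */
    static long remainingInRequest(long requestedStreamLen, long pos) {
        return Math.max(0, requestedStreamLen - pos);
    }

    /** Choose the end of the GET range for a read starting at targetPos. */
    static long requestedStreamLen(long targetPos, long readLen,
                                   long readahead, long contentLength) {
        return Math.min(contentLength, targetPos + Math.max(readLen, readahead));
    }

    public static void main(String[] args) {
        long contentLength = 1_000_000_000L; // 1 GB object
        // Old behaviour: request to contentLength; closing at pos = 1 MB
        // leaves ~999 MB unread, so closeStream() must abort().
        System.out.println(remainingInRequest(contentLength, 1_000_000L));
        // Bounded request: the range ends near the read, so close() drains
        // little (here, nothing) and the connection is reusable.
        long len = requestedStreamLen(0, 1_000_000L, 65_536L, contentLength);
        System.out.println(remainingInRequest(len, 1_000_000L));
    }
}
```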
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314405#comment-15314405 ] Hudson commented on HADOOP-13171: - SUCCESS: Integrated in Hadoop-trunk-Commit #9904 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9904/]) HADOOP-13171. Add StorageStatistics to S3A; instrument some more (cnauroth: rev c58a59f7081d55dd2108545ebf9ee48cf43ca944) * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/ProgressableProgressListener.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AOutputStream.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java * hadoop-tools/hadoop-aws/src/test/resources/log4j.properties * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADirectoryPerformance.java * hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3AInputStreamPerformance.java * hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFileOperationCost.java * 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13171-014.patch, HADOOP-13171-016.patch, > HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, > HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, > HADOOP-13171-branch-2-005.patch, HADOOP-13171-branch-2-006.patch, > HADOOP-13171-branch-2-007.patch, HADOOP-13171-branch-2-008.patch, > HADOOP-13171-branch-2-009.patch, HADOOP-13171-branch-2-010.patch, > HADOOP-13171-branch-2-011.patch, HADOOP-13171-branch-2-012.patch, > HADOOP-13171-branch-2-013.patch, HADOOP-13171-branch-2-015.patch, > HADOOP-13171-branch-2.8-017.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
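The "sharing across all instances" idea can be sketched as per-instance updates against one global registry of counters (illustrative names, not the actual S3AStorageStatistics code): every filesystem instance increments the same named statistic, so totals reflect all instances in the process.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a process-wide registry of named counters. Instrumentation in each
// instance calls increment(); readers see the combined totals.
public class SharedStatsSketch {
    private static final Map<String, AtomicLong> COUNTERS = new ConcurrentHashMap<>();

    static void increment(String statistic, long delta) {
        COUNTERS.computeIfAbsent(statistic, k -> new AtomicLong()).addAndGet(delta);
    }

    static long value(String statistic) {
        AtomicLong c = COUNTERS.get(statistic);
        return c == null ? 0 : c.get();
    }

    public static void main(String[] args) {
        // Two "filesystem instances" updating the same statistic.
        increment("object_list_requests", 1);
        increment("object_list_requests", 2);
        System.out.println(value("object_list_requests"));
    }
}
```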
[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esther Kundin updated HADOOP-12291: --- Status: In Progress (was: Patch Available) > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esther Kundin updated HADOOP-12291: --- Attachment: HADOOP-12291.007.patch > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esther Kundin updated HADOOP-12291: --- Status: Patch Available (was: In Progress) > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly.
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13171: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed this to trunk, branch-2 and branch-2.8. Steve, thank you for the patch. > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13171-014.patch, HADOOP-13171-016.patch, > HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, > HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, > HADOOP-13171-branch-2-005.patch, HADOOP-13171-branch-2-006.patch, > HADOOP-13171-branch-2-007.patch, HADOOP-13171-branch-2-008.patch, > HADOOP-13171-branch-2-009.patch, HADOOP-13171-branch-2-010.patch, > HADOOP-13171-branch-2-011.patch, HADOOP-13171-branch-2-012.patch, > HADOOP-13171-branch-2-013.patch, HADOOP-13171-branch-2-015.patch, > HADOOP-13171-branch-2.8-017.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
[jira] [Commented] (HADOOP-13236) truncate will fail when we use viewFS.
[ https://issues.apache.org/jira/browse/HADOOP-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314333#comment-15314333 ] Arpit Agarwal commented on HADOOP-13236: Hi [~brahmareddy], which file/branch is that code snippet from? I see the following in ViewFs.java in trunk which looks correct. {code} @Override public boolean truncate(final Path f, final long newLength) throws AccessControlException, FileNotFoundException, UnresolvedLinkException, IOException { InodeTree.ResolveResult res = fsState.resolve(getUriPath(f), true); return res.targetFileSystem.truncate(res.remainingPath, newLength); } {code} > truncate will fail when we use viewFS. > -- > > Key: HADOOP-13236 > URL: https://issues.apache.org/jira/browse/HADOOP-13236 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > > truncate will fail when we use viewFS. > {code} > @Override > public boolean truncate(final Path f, final long newLength) > throws IOException { > InodeTree.ResolveResult res = > fsState.resolve(getUriPath(f), true); > return res.targetFileSystem.truncate(f, newLength); > } > {code} > *Path should be like below:* > {{return res.targetFileSystem.truncate(f, newLength);}} *should be* > {{return res.targetFileSystem.truncate(res.remainingPath, newLength);}}
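Why the remaining path matters can be shown with a toy mount-table resolver (illustrative names, not the ViewFs internals): once a viewfs mount point is resolved, only the path remainder is meaningful to the target filesystem, so passing the original viewfs path through, as the buggy snippet does, asks the target for a path it does not have.

```java
import java.util.Map;

// Sketch: resolve a viewfs-style path against a mount table, returning the
// target filesystem URI and the path remainder to use against that target.
public class MountResolveSketch {
    /** Returns { targetFs, remainingPath } for the first matching mount. */
    static String[] resolve(Map<String, String> mounts, String path) {
        for (Map.Entry<String, String> m : mounts.entrySet()) {
            if (path.startsWith(m.getKey())) {
                return new String[] {
                    m.getValue(),
                    path.substring(m.getKey().length())
                };
            }
        }
        throw new IllegalArgumentException("no mount point for " + path);
    }

    public static void main(String[] args) {
        Map<String, String> mounts = Map.of("/view/data", "hdfs://nn1/data");
        String[] r = resolve(mounts, "/view/data/logs/app.log");
        // An operation like truncate must be issued against r[1] on r[0];
        // the original "/view/data/..." path does not exist on the target.
        System.out.println(r[0] + " " + r[1]);
    }
}
```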
[jira] [Commented] (HADOOP-12709) Deprecate s3:// in branch-2; cut from trunk
[ https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314330#comment-15314330 ] Chris Nauroth commented on HADOOP-12709: Once again, the failure in {{TestDNS}} is unrelated. Otherwise, pre-commit looks clean. > Deprecate s3:// in branch-2; cut from trunk > > > Key: HADOOP-12709 > URL: https://issues.apache.org/jira/browse/HADOOP-12709 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Mingliang Liu > Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, > HADOOP-12709.002.patch, HADOOP-12709.003.patch, HADOOP-12709.004.patch, > HADOOP-12709.005.patch > > > The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* > shows that it's not being used. While invaluable at the time, s3n and > especially s3a render it obsolete except for reading existing data. > I propose > # Mark Java source as {{@deprecated}} > # Warn the first time in a JVM that an S3 instance is created, "deprecated > -will be removed in future releases" > # In Hadoop trunk we really cut it. Maybe have an attic project (external?) > which holds it for anyone who still wants it. Or: retain the code but remove > the {{fs.s3.impl}} config option, so you have to explicitly add it for use.
[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl
[ https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314277#comment-15314277 ] John Zhuge commented on HADOOP-13225: - [~dlmarion] Will you continue to work on this jira? If not, do you mind if I take over? > Allow java to be started with numactl > - > > Key: HADOOP-13225 > URL: https://issues.apache.org/jira/browse/HADOOP-13225 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Dave Marion >Assignee: Dave Marion > Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, > HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch > > > Allow numactl constraints to be applied to the datanode process. The > implementation I have in mind involves two environment variables (enable and > parameters) in the datanode startup process. Basically, if enabled and > numactl exists on the system, then start the java process using it. Provide a > default set of parameters, and allow the user to override the default. Wiring > this up for the non-jsvc use case seems straightforward. Not sure how this > can be supported using jsvc.
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314157#comment-15314157 ] Steve Loughran commented on HADOOP-13171: - Chris: I've tested the patch 017 against S3 Ireland, all is well > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13171-014.patch, HADOOP-13171-016.patch, > HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, > HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, > HADOOP-13171-branch-2-005.patch, HADOOP-13171-branch-2-006.patch, > HADOOP-13171-branch-2-007.patch, HADOOP-13171-branch-2-008.patch, > HADOOP-13171-branch-2-009.patch, HADOOP-13171-branch-2-010.patch, > HADOOP-13171-branch-2-011.patch, HADOOP-13171-branch-2-012.patch, > HADOOP-13171-branch-2-013.patch, HADOOP-13171-branch-2-015.patch, > HADOOP-13171-branch-2.8-017.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15314015#comment-15314015 ] Hadoop QA commented on HADOOP-13227: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 25s {color} | {color:red} root: The patch generated 5 new + 213 unchanged - 1 fixed = 218 total (was 214) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 31s {color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 35s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 109m 55s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce() calls Thread.sleep() with a lock held At RetryInvocationHandler.java:lock held At RetryInvocationHandler.java:[line 107] | | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12807818/c13227_20160602.patch | | JIRA Issue | HADOOP-13227 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 44459a3170c4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 97e2449 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle |
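The FindBugs item in the QA report above (RetryInvocationHandler$Call.invokeOnce() calling Thread.sleep() with a lock held) flags a general anti-pattern: sleeping inside a synchronized block keeps the monitor and starves every other thread that needs it. The sketch below is a generic illustration of that pattern and the usual Object.wait(timeout) remedy, not the actual RetryInvocationHandler code:

```java
// Generic illustration of the FindBugs SWL (sleep-with-lock) warning.
public class SleepWithLockSketch {
    private final Object lock = new Object();

    // Anti-pattern: Thread.sleep() keeps the monitor for the whole pause,
    // so no other thread can enter synchronized (lock) until it returns.
    void badBackoff(long millis) throws InterruptedException {
        synchronized (lock) {
            Thread.sleep(millis); // FindBugs: sleep with lock held
        }
    }

    // Preferred: Object.wait(timeout) releases the monitor while blocked,
    // letting other threads make progress (and notify early if needed).
    void betterBackoff(long millis) throws InterruptedException {
        synchronized (lock) {
            lock.wait(millis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SleepWithLockSketch s = new SleepWithLockSketch();
        s.betterBackoff(50);
        System.out.println("done");
    }
}
```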
[jira] [Moved] (HADOOP-13236) truncate will fail when we use viewFS.
[ https://issues.apache.org/jira/browse/HADOOP-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula moved HDFS-10482 to HADOOP-13236: -- Key: HADOOP-13236 (was: HDFS-10482) Project: Hadoop Common (was: Hadoop HDFS) > truncate will fail when we use viewFS. > -- > > Key: HADOOP-13236 > URL: https://issues.apache.org/jira/browse/HADOOP-13236 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > > truncate will fail when we use viewFS. > {code} > @Override > public boolean truncate(final Path f, final long newLength) > throws IOException { > InodeTree.ResolveResult res = > fsState.resolve(getUriPath(f), true); > return res.targetFileSystem.truncate(f, newLength); > } > {code} > *The call should pass the resolved path:* > {{return res.targetFileSystem.truncate(f, newLength);}} *should be* > {{return res.targetFileSystem.truncate(res.remainingPath, newLength);}}
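The one-line fix above, passing res.remainingPath instead of the original viewFS path, can be illustrated with a self-contained toy model of mount resolution. The ResolveResult and resolve names below only mimic the real viewFS classes; the "filesystem" here is just a map of path to length:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of why viewFS truncate must use the mount-resolved remaining
// path: the backing filesystem only knows paths *inside* the mount point.
public class ViewFsTruncateSketch {
    // Stand-in for InodeTree.ResolveResult: a target FS plus the path
    // remaining after the mount prefix is stripped.
    static class ResolveResult {
        final Map<String, Long> targetFileSystem; // toy FS: path -> length
        final String remainingPath;
        ResolveResult(Map<String, Long> fs, String remaining) {
            this.targetFileSystem = fs;
            this.remainingPath = remaining;
        }
    }

    // Toy resolver: strips the "/view" mount prefix.
    static ResolveResult resolve(Map<String, Long> backingFs, String viewPath) {
        return new ResolveResult(backingFs, viewPath.substring("/view".length()));
    }

    // Correct pattern: operate on the *remaining* path, not the view path.
    static boolean truncate(Map<String, Long> backingFs, String viewPath,
                            long newLength) {
        ResolveResult res = resolve(backingFs, viewPath);
        // Looking up the original viewPath here (the reported bug) would
        // never find the file in the backing FS.
        if (!res.targetFileSystem.containsKey(res.remainingPath)) {
            return false;
        }
        res.targetFileSystem.put(res.remainingPath, newLength);
        return true;
    }

    public static void main(String[] args) {
        Map<String, Long> fs = new HashMap<>();
        fs.put("/data/file", 100L);
        System.out.println(truncate(fs, "/view/data/file", 10L)); // true
        System.out.println(fs.get("/data/file"));                 // 10
    }
}
```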
[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-13227: - Status: Patch Available (was: In Progress) > AsyncCallHandler should use an event-driven architecture to handle async calls > - > > Key: HADOOP-13227 > URL: https://issues.apache.org/jira/browse/HADOOP-13227 > Project: Hadoop Common > Issue Type: Improvement > Components: io, ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: c13227_20160602.patch > > > This JIRA is to address [Jing's > comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630] > in HADOOP-13226.
[jira] [Commented] (HADOOP-13235) Use Date and Time API in KafkaSink
[ https://issues.apache.org/jira/browse/HADOOP-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313882#comment-15313882 ] Hadoop QA commented on HADOOP-13235: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s {color} | {color:green} hadoop-kafka in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 29s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12807913/HADOOP-13235.01.patch | | JIRA Issue | HADOOP-13235 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1620ed4a8226 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 97e2449 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9657/testReport/ | | modules | C: hadoop-tools/hadoop-kafka U: hadoop-tools/hadoop-kafka | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9657/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use Date and Time API in KafkaSink > -- > > Key: HADOOP-13235 > URL: https://issues.apache.org/jira/browse/HADOOP-13235 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA >Priority: Minor > Labels: jdk8 > Attachments: HADOOP-13235.01.patch > > > We can use Date and Time API (JSR-310) in trunk code.
[jira] [Work started] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files
[ https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-11601 started by Steve Loughran. --- > Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty > files > --- > > Key: HADOOP-11601 > URL: https://issues.apache.org/jira/browse/HADOOP-11601 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, test >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > HADOOP-11584 has shown that the contract tests are not validating that > {{FileStatus.getBlocksize()}} must be >0 for any analytics jobs to partition > workload correctly. > Clarify in text and add test to do this. Test MUST be designed to work > against eventually consistent filesystems where {{getFileStatus()}} may not > be immediately visible, by retrying the operation if the FS declares it is an object > store.
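The retry-until-visible behaviour the issue description mandates can be sketched as below. StatusProbe and awaitBlockSize are illustrative names, not Hadoop contract-test APIs; the probe simulates an eventually consistent object store that only reports the file on the third lookup:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: poll until getFileStatus()-style lookup succeeds, then assert
// that a non-empty file reports a block size > 0.
public class BlocksizeContractSketch {
    interface StatusProbe { Long blockSize(); } // returns null until visible

    static long awaitBlockSize(StatusProbe probe, int maxAttempts)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            Long bs = probe.blockSize();
            if (bs != null) {
                return bs; // file became visible
            }
            Thread.sleep(10); // back off before re-probing the store
        }
        throw new AssertionError("file never became visible");
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger calls = new AtomicInteger();
        // Simulated eventual consistency: visible from the 3rd probe on.
        StatusProbe probe = () -> calls.incrementAndGet() < 3 ? null : 128L;
        long bs = awaitBlockSize(probe, 10);
        if (bs <= 0) {
            throw new AssertionError("blocksize must be > 0 for non-empty files");
        }
        System.out.println(bs); // prints 128
    }
}
```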
[jira] [Updated] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files
[ https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-11601: Status: Open (was: Patch Available) > Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty > files > --- > > Key: HADOOP-11601 > URL: https://issues.apache.org/jira/browse/HADOOP-11601 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, test >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: BB2015-05-TBR > Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > HADOOP-11584 has shown that the contract tests are not validating that > {{FileStatus.getBlocksize()}} must be >0 for any analytics jobs to partition > workload correctly. > Clarify in text and add test to do this. Test MUST be designed to work > against eventually consistent filesystems where {{getFileStatus()}} may not > be immediately visible, by retrying the operation if the FS declares it is an object > store.
[jira] [Updated] (HADOOP-13235) Use Date and Time API in KafkaSink
[ https://issues.apache.org/jira/browse/HADOOP-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HADOOP-13235: --- Status: Patch Available (was: Open) > Use Date and Time API in KafkaSink > -- > > Key: HADOOP-13235 > URL: https://issues.apache.org/jira/browse/HADOOP-13235 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA >Priority: Minor > Labels: jdk8 > Attachments: HADOOP-13235.01.patch > > > We can use Date and Time API (JSR-310) in trunk code.
[jira] [Assigned] (HADOOP-13235) Use Date and Time API in KafkaSink
[ https://issues.apache.org/jira/browse/HADOOP-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reassigned HADOOP-13235: -- Assignee: Akira AJISAKA > Use Date and Time API in KafkaSink > -- > > Key: HADOOP-13235 > URL: https://issues.apache.org/jira/browse/HADOOP-13235 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA >Priority: Minor > Labels: jdk8 > Attachments: HADOOP-13235.01.patch > > > We can use Date and Time API (JSR-310) in trunk code.
[jira] [Updated] (HADOOP-13235) Use Date and Time API in KafkaSink
[ https://issues.apache.org/jira/browse/HADOOP-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HADOOP-13235: --- Attachment: HADOOP-13235.01.patch 01 patch: * Use Date and Time API in KafkaSink * Re-use DateTimeFormatter * Fixes format "hh"(1-12) -> "HH" (0-23) > Use Date and Time API in KafkaSink > -- > > Key: HADOOP-13235 > URL: https://issues.apache.org/jira/browse/HADOOP-13235 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira AJISAKA >Priority: Minor > Labels: jdk8 > Attachments: HADOOP-13235.01.patch > > > We can use Date and Time API (JSR-310) in trunk code.
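The three changes listed for the 01 patch can be sketched with a minimal java.time example (this is an illustration of the described changes, not the actual KafkaSink code): a reused, thread-safe DateTimeFormatter replaces per-call SimpleDateFormat instances, and "HH" (hour-of-day, 0-23) replaces the buggy "hh" (clock-hour, 1-12):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Sketch of the JSR-310 change: reusable formatters with an explicit zone.
public class KafkaSinkTimeSketch {
    // DateTimeFormatter is immutable and thread-safe, so it can be a
    // shared constant (unlike SimpleDateFormat, which cannot be reused
    // across threads safely).
    private static final DateTimeFormatter DATE_FMT =
        DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneId.of("UTC"));
    // "HH" is hour-of-day (0-23); the old "hh" pattern is clock-hour
    // (1-12) and would render 13:00 as 01:00.
    private static final DateTimeFormatter TIME_FMT =
        DateTimeFormatter.ofPattern("HH:mm:ss").withZone(ZoneId.of("UTC"));

    public static void main(String[] args) {
        Instant ts = Instant.ofEpochSecond(13 * 3600L); // 1970-01-01 13:00 UTC
        System.out.println(DATE_FMT.format(ts)); // 1970-01-01
        System.out.println(TIME_FMT.format(ts)); // 13:00:00 ("hh" -> 01:00:00)
    }
}
```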
[jira] [Created] (HADOOP-13235) Use Date and Time API in KafkaSink
Akira AJISAKA created HADOOP-13235: -- Summary: Use Date and Time API in KafkaSink Key: HADOOP-13235 URL: https://issues.apache.org/jira/browse/HADOOP-13235 Project: Hadoop Common Issue Type: Improvement Reporter: Akira AJISAKA Priority: Minor We can use Date and Time API (JSR-310) in trunk code.