[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403402#comment-15403402 ]

Chris Nauroth commented on HADOOP-13403:
----------------------------------------

Yes, I agree about not using futures. If available memory is so constrained that the JVM can't run the {{ThreadPoolExecutor}} constructor, then it's really already a lost cause. At that point, the {{OutOfMemoryError}} could come from just about any line of code. Even if we manage to fall back to serial execution after that, we'll likely either get another {{OutOfMemoryError}} or the JVM will be unresponsive due to GC churn. {{OutOfMemoryError}} generally is not something recoverable, no matter how hard you try. I cannot think of any possible error condition from the {{getThreadPoolExecutor}} method that can be recovered reasonably. If you really think it's important to keep this, then please comment that this is defensive coding despite the fact that there are no known recoverable error conditions right now.

> AzureNativeFileSystem rename/delete performance improvements
> -------------------------------------------------------------
>
>                 Key: HADOOP-13403
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13403
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: azure
>    Affects Versions: 2.7.2
>            Reporter: Subramanyam Pattipaka
>            Assignee: Subramanyam Pattipaka
>             Fix For: 2.9.0
>
>         Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, HADOOP-13403-003.patch
>
>
> WASB Performance Improvements
>
> Problem
> -------
> Azure Native File System operations like rename/delete that have a large number of directories and/or files in the source directory are experiencing performance issues. Here are the possible reasons:
> a) We first list all files under the source directory hierarchically. This is a serial operation.
> b) After collecting the entire list of files under a folder, we delete or rename the files one by one, serially.
> c) There is no logging information available for these costly operations, even in DEBUG mode, leading to difficulty in understanding WASB performance issues.
>
> Proposal
> --------
> Step 1: Rename and delete operations will generate a list of all files under the source folder. We need to use the Azure flat listing option to get the list with a single request to the Azure store. We have introduced the config fs.azure.flatlist.enable to enable this option. The default value is 'false', which means flat listing is disabled.
> Step 2: Create the thread pool and threads dynamically based on user configuration. These thread pools will be deleted after the operation is over. We are introducing two new configs:
> a) fs.azure.rename.threads: sets the number of rename threads. The default value is 0, which means no threading.
> b) fs.azure.delete.threads: sets the number of delete threads. The default value is 0, which means no threading.
> We have provided debug log information on the number of threads not used for the operation, which can be useful for tuning.
> Failure scenarios: if we fail to create the thread pool for ANY reason (for example, trying to create it with a very large thread count such as 100), we fall back to the serial operation.
> Step 3: Blob operations can be done in parallel using multiple threads, each executing the following snippet:
>
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>     FileMetadata file = files[currentIndex];
>     Rename/delete(file);
>   }
>
> The above strategy depends on the fact that all files are stored in a final array, and each thread determines the next index to process in a synchronized way. The advantage of this strategy is that even if the user configures a large number of unusable threads, we always ensure that work doesn't get serialized due to lagging threads.
> We are logging the following information, which can be useful for tuning the number of threads:
> a) Number of unusable threads
> b) Time taken by each thread
> c) Number of files processed by each thread
> d) Total time taken for the operation
> Failure scenarios: failure to queue a thread execution request shouldn't be an issue as long as we can ensure that at least one thread has completed execution successfully. If we couldn't schedule even one thread, then we should take the serial path. Exceptions raised while executing threads are still considered regular exceptions and are returned to the client as an operation failure. Exceptions raised while stopping threads and deleting the thread pool can be ignored if the operation on all files completed without any issue.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
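The Step 3 work-distribution loop described above can be sketched as a small self-contained example. Names such as `deleteAll` and the use of plain `String[]` instead of `FileMetadata[]` are illustrative stand-ins, not the actual WASB code; the sketch only demonstrates the shared atomic index and the fall-back-to-serial behavior when pool creation fails.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelDelete {

    // Process every entry of 'files' exactly once; returns the count processed.
    static int deleteAll(String[] files, int threadCount) {
        final AtomicInteger fileIndex = new AtomicInteger(0);
        final AtomicInteger processed = new AtomicInteger(0);

        ExecutorService pool = null;
        try {
            // Any failure here (e.g. an invalid thread count) drops us back
            // to the serial path, mirroring the proposal's failure scenario.
            pool = Executors.newFixedThreadPool(threadCount);
        } catch (Exception | Error e) {
            pool = null;
        }

        Runnable worker = () -> {
            int i;
            // Each thread atomically claims the next index, so fast threads
            // absorb the work of lagging or never-started ones.
            while ((i = fileIndex.getAndIncrement()) < files.length) {
                // Rename/delete(files[i]) would go here.
                processed.incrementAndGet();
            }
        };

        if (pool == null) {
            worker.run(); // serial fallback
        } else {
            for (int t = 0; t < threadCount; t++) {
                pool.execute(worker);
            }
            pool.shutdown();
            try {
                pool.awaitTermination(1, TimeUnit.MINUTES);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        return processed.get();
    }

    public static void main(String[] args) {
        // 100 files, 4 threads: each file is claimed exactly once.
        System.out.println(deleteAll(new String[100], 4));
        // Thread count 0 makes pool creation fail, taking the serial path.
        System.out.println(deleteAll(new String[5], 0));
    }
}
```

The key property is that the index, not the thread, owns each file: even if only one of the requested workers ever runs, the full array still gets processed.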
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403379#comment-15403379 ]

shimingfei commented on HADOOP-12756:
--------------------------------------

[~drankye] I have rebased the patch against the latest HADOOP-12756 branch.

> Incorporate Aliyun OSS file system implementation
> --------------------------------------------------
>
>                 Key: HADOOP-12756
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12756
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs
>    Affects Versions: 2.8.0, HADOOP-12756
>            Reporter: shimingfei
>            Assignee: shimingfei
>             Fix For: HADOOP-12756
>
>         Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, HCFS User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but currently it is not easy to access data stored on OSS from a user's Hadoop/Spark application, because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, Spark/Hadoop applications can read/write data from OSS without any code change, narrowing the gap between the user's application and data storage, as has been done for S3 in Hadoop.

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403378#comment-15403378 ]

shimingfei commented on HADOOP-12756:
--------------------------------------

Thanks Kai. I have updated the patch: HADOOP-12756.007.patch.

Thanks,
Mingfei

> Incorporate Aliyun OSS file system implementation
> --------------------------------------------------
>
>                 Key: HADOOP-12756
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12756
[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403375#comment-15403375 ]

Duo Zhang commented on HADOOP-13433:
-------------------------------------

Some progress... I tried to write a UT by manually moving the TGT to the end of the private credentials, and the service ticket request is sent to the KDC as expected when creating a SaslClient. But our MiniKdc does not check the prefix of a TGT, so there is no error...

> Race in UGI.reloginFromKeytab
> ------------------------------
>
>                 Key: HADOOP-13433
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13433
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>            Reporter: Duo Zhang
>
> This is a problem that has troubled us for several years. For our HBase cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: The ticket isn't for us (35) - BAD TGS SERVER NAME)]
>         at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
>         at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
>         at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
>         at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
>         at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
>         at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
>         at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
>         at org.apache.hadoop.hbase.security.User.call(User.java:607)
>         at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
>         at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
>         at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
>         at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
>         at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
>         at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
>         at $Proxy24.replicateLogEntries(Unknown Source)
>         at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
>         at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
>         at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The ticket isn't for us (35) - BAD TGS SERVER NAME)
>         at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
>         at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
>         at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
>         at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
>         ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
>         at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
>         at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
>         at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
>         at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
>         at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
>         at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
>         ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
>         at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
>         at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
>         at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
>         at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
>         ... 31 more
> {noformat}
> It rarely
[jira] [Resolved] (HADOOP-13456) Just for watching
[ https://issues.apache.org/jira/browse/HADOOP-13456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey resolved HADOOP-13456.
----------------------------------
    Resolution: Invalid

Please send general questions to the user@hadoop mailing list.

> Just for watching
> ------------------
>
>                 Key: HADOOP-13456
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13456
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: syannn
[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shimingfei updated HADOOP-12756:
---------------------------------
    Attachment: HADOOP-12756.007.patch

> Incorporate Aliyun OSS file system implementation
> --------------------------------------------------
>
>                 Key: HADOOP-12756
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12756
[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shimingfei updated HADOOP-12756:
---------------------------------
    Attachment: (was: HADOOP-12756.007.patch)

> Incorporate Aliyun OSS file system implementation
> --------------------------------------------------
>
>                 Key: HADOOP-12756
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12756
[jira] [Created] (HADOOP-13456) Just for watching
syannn created HADOOP-13456:
-----------------------------

             Summary: Just for watching
                 Key: HADOOP-13456
                 URL: https://issues.apache.org/jira/browse/HADOOP-13456
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: syannn
[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403276#comment-15403276 ]

Subramanyam Pattipaka commented on HADOOP-13403:
-------------------------------------------------

[~cnauroth], I have reproduced the test issue on Linux machines. On Windows machines, these tests are passing. I will fix these tests on Linux in the next patch.

> AzureNativeFileSystem rename/delete performance improvements
> -------------------------------------------------------------
>
>                 Key: HADOOP-13403
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13403
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: azure
>    Affects Versions: 2.7.2
>            Reporter: Subramanyam Pattipaka
>            Assignee: Subramanyam Pattipaka
>             Fix For: 2.9.0
[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding
[ https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403204#comment-15403204 ]

Sean Busbey commented on HADOOP-13344:
---------------------------------------

It would be easier (for me at least) to comment on a specific patch rather than a description. At first blush, it doesn't sound like your proposal changes any of the declared dependencies within Maven. If that's correct, then downstream users will still pull in the dependency when they shouldn't.

> Add option to exclude Hadoop's SLF4J binding
> ---------------------------------------------
>
>                 Key: HADOOP-13344
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13344
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: bin, scripts
>    Affects Versions: 2.8.0, 2.7.2
>            Reporter: Thomas Poepping
>            Assignee: Thomas Poepping
>              Labels: patch
>         Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J binding for logging, and that jar is not exactly the same as the one brought in by Hadoop, then there will be a conflict between logging jars on the two classpaths. This patch introduces an optional setting to remove Hadoop's SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as the bin/ and hadoop-config.sh structure has been changed in 3.0.0.
[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403181#comment-15403181 ]

Subramanyam Pattipaka commented on HADOOP-13403:
-------------------------------------------------

[~cnauroth], thanks for your comments. I will add more comments on executeParallel and generate another patch.

I refactored the code so that both delete and rename operations use a single interface. Rename already has an array, and the array is already used in other places. If we use a ConcurrentLinkedQueue and remove contents from it, then after the executeParallel call there won't be any entries left in the queue; if we later need the contents for reuse, we would have to regenerate the list of files. Using an array, we do the job without losing entries, which can be useful for other cases in the future.

Regarding futures, I hope you agree to keep the current pattern and not use futures.

Regarding getThreadPool, we are doing a new operation. This can potentially result in an OutOfMemoryError if a very large value is given as input. This could happen even for a moderately large number if the current thread has already reached the maximum heap size due to object allocations like the FileMetadata array. Even if we restrict this to a maximum value like 1024, an OutOfMemoryError is still possible, however remote. Currently I can't think of another scenario, but I don't want to discover one later that makes the operation fail. Instead, for any kind of exception raised as part of the new ThreadPoolExecutor() operation, we want to take the serial path. I have already included checks for the basic cases, such as checking that threadCount > 1 after going through the user configuration; this is an extra safety check on top of that.

I ran the tests and all of them are passing. Can you please provide details on what errors you are seeing?
> AzureNativeFileSystem rename/delete performance improvements
> -------------------------------------------------------------
>
>                 Key: HADOOP-13403
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13403
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: azure
>    Affects Versions: 2.7.2
>            Reporter: Subramanyam Pattipaka
>            Assignee: Subramanyam Pattipaka
>             Fix For: 2.9.0
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403172#comment-15403172 ]

Akira Ajisaka commented on HADOOP-13444:
-----------------------------------------

And thanks [~busbey] for the review.

> Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-13444
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13444
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.7.2
>            Reporter: Vincent Poon
>            Assignee: Vincent Poon
>            Priority: Minor
>             Fix For: 2.8.0, 3.0.0-alpha2
>
>         Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.patch, HADOOP-13444.patch
>
>
> org.apache.commons.io.Charsets is deprecated in favor of java.nio.charset.StandardCharsets
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-13444:
------------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved (was: Patch Available)

Committed this to branch-2 and branch-2.8. Thanks [~vincentpoon] for the contribution. I'll review HDFS-10707 next.

> Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-13444
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13444
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403167#comment-15403167 ]

Akira Ajisaka commented on HADOOP-13444:
-----------------------------------------

+1, thanks Vincent.

> Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-13444
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13444
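The replacement discussed in this issue is mechanical; a minimal sketch of the before/after shape (the deprecated commons-io call is shown only as a comment, so the snippet depends on nothing but the JDK):

```java
import java.nio.charset.StandardCharsets;

public class CharsetsMigration {
    public static void main(String[] args) {
        String s = "héllo";

        // Before: byte[] bytes = s.getBytes(org.apache.commons.io.Charsets.UTF_8);
        // After: the JDK's own constant, available since Java 7, no extra dependency.
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);

        // Round-trips losslessly, exactly as the commons-io constant did.
        String back = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(back.equals(s)); // prints: true
    }
}
```

Using the `StandardCharsets` constants also avoids the checked `UnsupportedEncodingException` that string-name charset lookups require.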
[jira] [Commented] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters
[ https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403140#comment-15403140 ]

Junping Du commented on HADOOP-13362:
--------------------------------------

bq. 2.7.3 was in lockdown for only critical blockers that were preventing it from being released, so I just committed it to branch-2.7.
I see. We have seen this happen a lot recently in some clusters with container metrics enabled, so I wish we could have it in 2.7.3.
bq. I'm OK with it going into 2.7.3 if there's still a chance to do so, assuming the 2.7.3 release manager is also OK with it.
Agreed. Even though this should not block the 2.7.3 release, we should try to get it in if there is any chance. [~vinodkv], I saw you left a comment in the 2.7.3-rc0 voting thread that rc0 will be withdrawn. Can you confirm this patch can be backported to 2.7.3? Thanks!

> DefaultMetricsSystem leaks the source name when a source unregisters
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-13362
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13362
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 2.7.2
>            Reporter: Jason Lowe
>            Assignee: Junping Du
>            Priority: Critical
>             Fix For: 2.7.4
>
>         Attachments: HADOOP-13362-branch-2.7.patch
>
>
> Ran across a nodemanager that was spending most of its time in GC. Upon examination of the heap, most of the memory was going to the map of names in org.apache.hadoop.metrics2.lib.UniqueNames. In this case the map had almost 2 million entries. A few of the map entries looked like "ContainerResource_container_e01_1459548490386_8560138_01_002020", "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> It looks like the ContainerMetrics for each container cause a unique name to be registered with UniqueNames, and the name is never unregistered.
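To make the leak pattern concrete, here is an illustrative sketch — not the actual {{UniqueNames}} code — of a suffix-assigning name map that grows forever unless entries are removed when a source unregisters, which is essentially the fix being discussed:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a UniqueNames-style map that appends a numeric
// suffix for duplicate base names. Without a removal path, one entry per
// container accumulates for the lifetime of the JVM.
public class UniqueNamesLeak {
    private final Map<String, Integer> names = new HashMap<>();

    String uniqueName(String name) {
        Integer count = names.get(name);
        if (count == null) {
            names.put(name, 1);          // first registration keeps the raw name
            return name;
        }
        names.put(name, count + 1);      // duplicates get a "-N" suffix
        return name + "-" + (count + 1);
    }

    // The fix: drop the entry when the source unregisters, so
    // per-container names do not accumulate forever.
    void unregister(String name) {
        names.remove(name);
    }

    int size() { return names.size(); }

    public static void main(String[] args) {
        UniqueNamesLeak u = new UniqueNamesLeak();
        for (int i = 0; i < 1000; i++) {
            u.uniqueName("ContainerResource_container_" + i);
            u.unregister("ContainerResource_container_" + i); // without this line, size() stays 1000
        }
        System.out.println(u.size()); // prints 0
    }
}
```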
[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403115#comment-15403115 ] Hadoop QA commented on HADOOP-13081: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 7s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821458/HADOOP-13081.03.patch | | JIRA Issue | HADOOP-13081 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 407a418a9aab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f473cf | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10147/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10147/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, > HADOOP-13081.patch > > >
[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument
[ https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403094#comment-15403094 ] Luke Lu commented on HADOOP-12747: -- Hey [~sjlee0], this is a very convenient feature. The patch looks good to me. +1. > support wildcard in libjars argument > > > Key: HADOOP-12747 > URL: https://issues.apache.org/jira/browse/HADOOP-12747 > Project: Hadoop Common > Issue Type: New Feature > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee > Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, > HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, > HADOOP-12747.06.patch, HADOOP-12747.07.patch > > > There is a problem when a user job adds too many dependency jars in their > command line. The HADOOP_CLASSPATH part can be addressed, including using > wildcards (\*). But the same cannot be done with the -libjars argument. Today > it takes only fully specified file paths. > We may want to consider supporting wildcards as a way to help users in this > situation. The idea is to handle it the same way the JVM does it: \* expands > to the list of jars in that directory. It does not traverse into any child > directory. > Also, it probably would be a good idea to do it only for libjars (i.e. don't > do it for -files and -archives).
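The JVM-style expansion described in the issue can be sketched as follows (a hypothetical helper, not the patch's actual code): a trailing {{\*}} expands to the jars directly inside that directory, with no recursion, and fully specified paths pass through unchanged:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of JVM-classpath-style wildcard expansion for -libjars.
public class LibjarsWildcard {
    static List<String> expand(String entry) {
        if (!entry.endsWith("*")) {
            return List.of(entry);                 // fully specified path, unchanged
        }
        File dir = new File(entry.substring(0, entry.length() - 1));
        List<String> jars = new ArrayList<>();
        File[] children = dir.listFiles();         // null if the directory does not exist
        if (children != null) {
            for (File f : children) {
                // only regular files ending in .jar; no descent into child directories
                if (f.isFile() && f.getName().endsWith(".jar")) {
                    jars.add(f.getPath());
                }
            }
        }
        return jars;
    }
}
```

This mirrors the semantics the description asks for: {{dir/\*}} matches jars in {{dir}} only, matching how the {{java}} launcher expands classpath wildcards.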
[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403085#comment-15403085 ] Chris Nauroth commented on HADOOP-13446: Yes, using the local DynamoDB absolutely makes sense. +1. FWIW, I've put in a feature request to the AWS SDK team to include a similar local S3 simulator. They say it's a fairly common feature request, though it's never quite bubbled up the priority list for them to do it. > S3Guard: Support running isolated unit tests separate from AWS integration > tests. > - > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable.
[jira] [Updated] (HADOOP-13455) S3Guard: Write end user documentation.
[ https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13455: --- Priority: Major (was: Minor) > S3Guard: Write end user documentation. > -- > > Key: HADOOP-13455 > URL: https://issues.apache.org/jira/browse/HADOOP-13455 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Write end user documentation that describes S3Guard architecture, > configuration and usage.
[jira] [Updated] (HADOOP-13448) S3Guard: Define MetadataStore interface.
[ https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13448: --- Priority: Major (was: Minor) > S3Guard: Define MetadataStore interface. > > > Key: HADOOP-13448 > URL: https://issues.apache.org/jira/browse/HADOOP-13448 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Define the common interface for metadata store operations. This is the > interface that any metadata back-end must implement in order to integrate > with S3Guard.
[jira] [Updated] (HADOOP-13451) S3Guard: Implement access policy using metadata store as source of truth.
[ https://issues.apache.org/jira/browse/HADOOP-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13451: --- Priority: Major (was: Minor) > S3Guard: Implement access policy using metadata store as source of truth. > - > > Key: HADOOP-13451 > URL: https://issues.apache.org/jira/browse/HADOOP-13451 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Implement an S3A access policy that provides strong consistency and improved > performance by using the metadata store as the source of truth for metadata > operations. In many cases, this will allow S3A to short-circuit calls to S3. > Assuming shorter latency for calls to the metadata store compared to S3, we > expect this will improve overall performance. With this policy, a client may > not be capable of reading data loaded into an S3 bucket by external tools > that don't integrate with the metadata store. Users need to be made aware of > this limitation.
[jira] [Updated] (HADOOP-13450) S3Guard: Implement access policy providing strong consistency with S3 as source of truth.
[ https://issues.apache.org/jira/browse/HADOOP-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13450: --- Priority: Major (was: Minor) > S3Guard: Implement access policy providing strong consistency with S3 as > source of truth. > - > > Key: HADOOP-13450 > URL: https://issues.apache.org/jira/browse/HADOOP-13450 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Implement an S3A access policy that provides strong consistency by > cross-checking with the consistent metadata store, but still using S3 as the > source of truth. This access policy will be well suited to users who > want an improved consistency guarantee but also want the freedom to load data > into the bucket using external tools that don't integrate with the metadata > store.
[jira] [Updated] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.
[ https://issues.apache.org/jira/browse/HADOOP-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13454: --- Priority: Major (was: Minor) > S3Guard: Provide custom FileSystem Statistics. > -- > > Key: HADOOP-13454 > URL: https://issues.apache.org/jira/browse/HADOOP-13454 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Provide custom {{FileSystem}} {{Statistics}} with information about the > internal operational details of S3Guard.
[jira] [Updated] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.
[ https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13447: --- Priority: Major (was: Minor) > S3Guard: Refactor S3AFileSystem to support introduction of separate metadata > repository and tests. > -- > > Key: HADOOP-13447 > URL: https://issues.apache.org/jira/browse/HADOOP-13447 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > The scope of this issue is to refactor the existing {{S3AFileSystem}} into > multiple coordinating classes. The goal of this refactoring is to separate > the {{FileSystem}} API binding from the AWS SDK integration, make code > maintenance easier while we're making changes for S3Guard, and make it easier > to mock some implementation details so that tests can simulate eventual > consistency behavior in a deterministic way.
[jira] [Updated] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13449: --- Priority: Major (was: Minor) > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Provide an implementation of the metadata store backed by DynamoDB.
[jira] [Updated] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.
[ https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13452: --- Priority: Major (was: Minor) > S3Guard: Implement access policy for intra-client consistency with in-memory > metadata store. > > > Key: HADOOP-13452 > URL: https://issues.apache.org/jira/browse/HADOOP-13452 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Implement an S3A access policy based on an in-memory metadata store. This > can provide consistency within the same client without needing to integrate > with an external system.
[jira] [Updated] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13446: --- Priority: Major (was: Minor) > S3Guard: Support running isolated unit tests separate from AWS integration > tests. > - > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable.
[jira] [Updated] (HADOOP-13453) S3Guard: Instrument new functionality with Hadoop metrics.
[ https://issues.apache.org/jira/browse/HADOOP-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13453: --- Priority: Major (was: Minor) > S3Guard: Instrument new functionality with Hadoop metrics. > -- > > Key: HADOOP-13453 > URL: https://issues.apache.org/jira/browse/HADOOP-13453 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Provide Hadoop metrics showing operational details of the S3Guard > implementation.
[jira] [Comment Edited] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403080#comment-15403080 ] Mingliang Liu edited comment on HADOOP-13345 at 8/2/16 12:12 AM: - Thanks [~cnauroth] for updating the design doc and creating the sub-tasks. I suggest we elevate the sub-tasks' priority to "Major". was (Author: liuml07): Thanks for updating the design doc and creating the sub-tasks. I suggest we elevate the sub-tasks' priority to "Major". > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
[jira] [Created] (HADOOP-13455) S3Guard: Write end user documentation.
Chris Nauroth created HADOOP-13455: -- Summary: S3Guard: Write end user documentation. Key: HADOOP-13455 URL: https://issues.apache.org/jira/browse/HADOOP-13455 Project: Hadoop Common Issue Type: Sub-task Reporter: Chris Nauroth Priority: Minor Write end user documentation that describes S3Guard architecture, configuration and usage.
[jira] [Commented] (HADOOP-13455) S3Guard: Write end user documentation.
[ https://issues.apache.org/jira/browse/HADOOP-13455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403082#comment-15403082 ] Chris Nauroth commented on HADOOP-13455: This task should be done late in the cycle so that we're not prematurely documenting an implementation in flux. > S3Guard: Write end user documentation. > -- > > Key: HADOOP-13455 > URL: https://issues.apache.org/jira/browse/HADOOP-13455 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Priority: Minor > > Write end user documentation that describes S3Guard architecture, > configuration and usage.
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403080#comment-15403080 ] Mingliang Liu commented on HADOOP-13345: Thanks for updating the design doc and creating the sub-tasks. I suggest we elevate the sub-tasks' priority to "Major". > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
[jira] [Created] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.
Chris Nauroth created HADOOP-13454: -- Summary: S3Guard: Provide custom FileSystem Statistics. Key: HADOOP-13454 URL: https://issues.apache.org/jira/browse/HADOOP-13454 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Provide custom {{FileSystem}} {{Statistics}} with information about the internal operational details of S3Guard.
[jira] [Updated] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.
[ https://issues.apache.org/jira/browse/HADOOP-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13454: --- Priority: Minor (was: Major) > S3Guard: Provide custom FileSystem Statistics. > -- > > Key: HADOOP-13454 > URL: https://issues.apache.org/jira/browse/HADOOP-13454 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Priority: Minor > > Provide custom {{FileSystem}} {{Statistics}} with information about the > internal operational details of S3Guard.
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403079#comment-15403079 ] Mingliang Liu commented on HADOOP-13449: Feel free to assign it to me if your working queue is too long. Thanks [~cnauroth]. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Priority: Minor > > Provide an implementation of the metadata store backed by DynamoDB.
[jira] [Created] (HADOOP-13453) S3Guard: Instrument new functionality with Hadoop metrics.
Chris Nauroth created HADOOP-13453: -- Summary: S3Guard: Instrument new functionality with Hadoop metrics. Key: HADOOP-13453 URL: https://issues.apache.org/jira/browse/HADOOP-13453 Project: Hadoop Common Issue Type: Sub-task Reporter: Chris Nauroth Priority: Minor Provide Hadoop metrics showing operational details of the S3Guard implementation.
[jira] [Created] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.
Chris Nauroth created HADOOP-13452: -- Summary: S3Guard: Implement access policy for intra-client consistency with in-memory metadata store. Key: HADOOP-13452 URL: https://issues.apache.org/jira/browse/HADOOP-13452 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Priority: Minor Implement an S3A access policy based on an in-memory metadata store. This can provide consistency within the same client without needing to integrate with an external system.
[jira] [Commented] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.
[ https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403077#comment-15403077 ] Chris Nauroth commented on HADOOP-13452: For a similar concept, see this class in GCS: https://github.com/GoogleCloudPlatform/bigdata-interop/blob/1447da82f2bded2ac8493b07797a5c2483b70497/gcsio/src/main/java/com/google/cloud/hadoop/gcsio/InMemoryDirectoryListCache.java > S3Guard: Implement access policy for intra-client consistency with in-memory > metadata store. > > > Key: HADOOP-13452 > URL: https://issues.apache.org/jira/browse/HADOOP-13452 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Priority: Minor > > Implement an S3A access policy based on an in-memory metadata store. This > can provide consistency within the same client without needing to integrate > with an external system.
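In the same spirit as the GCS class referenced above, here is a minimal concept sketch (assumptions: a plain map keyed by object key; this is not S3Guard's actual design) of an in-memory record of a client's own writes, consulted to mask S3 list inconsistency within that one client:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Concept sketch only: a per-client in-memory record of recent creates and
// deletes, merged with an (eventually consistent) S3 listing so the client
// always sees its own writes.
public class InMemoryListingCache {
    private final Map<String, Boolean> recentWrites = new HashMap<>(); // key -> exists?

    void recordCreate(String key) { recentWrites.put(key, true); }
    void recordDelete(String key) { recentWrites.put(key, false); }

    List<String> consistentList(List<String> s3Listing) {
        List<String> out = new ArrayList<>();
        for (String key : s3Listing) {
            Boolean exists = recentWrites.get(key);
            if (exists == null || exists) {
                out.add(key);            // unknown to us, or known to still exist
            }                            // else: deleted by this client, hide it
        }
        for (Map.Entry<String, Boolean> e : recentWrites.entrySet()) {
            if (e.getValue() && !out.contains(e.getKey())) {
                out.add(e.getKey());     // created by this client, not yet visible in S3
            }
        }
        return out;
    }
}
```

A real implementation would also need entry expiry and size bounds so the cache does not itself become a memory leak.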
[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403073#comment-15403073 ] Mingliang Liu commented on HADOOP-13446: +1 (non-binding) for the proposal. As S3Guard is using DynamoDB as the main metadata store, I suggest we consider using DynamoDBLocal to assist unit tests. I've been using it locally for my other Hadoop-in-the-cloud projects, and it basically works as expected. Does this make sense? > S3Guard: Support running isolated unit tests separate from AWS integration > tests. > - > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable.
[jira] [Created] (HADOOP-13451) S3Guard: Implement access policy using metadata store as source of truth.
Chris Nauroth created HADOOP-13451: -- Summary: S3Guard: Implement access policy using metadata store as source of truth. Key: HADOOP-13451 URL: https://issues.apache.org/jira/browse/HADOOP-13451 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Priority: Minor Implement an S3A access policy that provides strong consistency and improved performance by using the metadata store as the source of truth for metadata operations. In many cases, this will allow S3A to short-circuit calls to S3. Assuming shorter latency for calls to the metadata store compared to S3, we expect this will improve overall performance. With this policy, a client may not be capable of reading data loaded into an S3 bucket by external tools that don't integrate with the metadata store. Users need to be made aware of this limitation.
[jira] [Created] (HADOOP-13450) S3Guard: Implement access policy providing strong consistency with S3 as source of truth.
Chris Nauroth created HADOOP-13450: -- Summary: S3Guard: Implement access policy providing strong consistency with S3 as source of truth. Key: HADOOP-13450 URL: https://issues.apache.org/jira/browse/HADOOP-13450 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Priority: Minor Implement an S3A access policy that provides strong consistency by cross-checking with the consistent metadata store, but still using S3 as the source of truth. This access policy will be well suited to users who want an improved consistency guarantee but also want the freedom to load data into the bucket using external tools that don't integrate with the metadata store.
[jira] [Updated] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.
[ https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13447: --- Priority: Minor (was: Major) > S3Guard: Refactor S3AFileSystem to support introduction of separate metadata > repository and tests. > -- > > Key: HADOOP-13447 > URL: https://issues.apache.org/jira/browse/HADOOP-13447 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > > The scope of this issue is to refactor the existing {{S3AFileSystem}} into > multiple coordinating classes. The goal of this refactoring is to separate > the {{FileSystem}} API binding from the AWS SDK integration, make code > maintenance easier while we're making changes for S3Guard, and make it easier > to mock some implementation details so that tests can simulate eventual > consistency behavior in a deterministic way.
[jira] [Created] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
Chris Nauroth created HADOOP-13449: -- Summary: S3Guard: Implement DynamoDBMetadataStore. Key: HADOOP-13449 URL: https://issues.apache.org/jira/browse/HADOOP-13449 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Priority: Minor Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13446: --- Priority: Minor (was: Major) > S3Guard: Support running isolated unit tests separate from AWS integration > tests. > - > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13448) S3Guard: Define MetadataStore interface.
Chris Nauroth created HADOOP-13448: -- Summary: S3Guard: Define MetadataStore interface. Key: HADOOP-13448 URL: https://issues.apache.org/jira/browse/HADOOP-13448 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor Define the common interface for metadata store operations. This is the interface that any metadata back-end must implement in order to integrate with S3Guard. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
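To make the idea concrete, such a contract might look roughly like the sketch below. Every name here is invented for illustration and is not the interface this issue ultimately defined; a real back-end (e.g. the DynamoDB store of HADOOP-13449) would implement the same contract.

```java
import java.util.*;

// Purely illustrative sketch of a metadata-store contract (all names are
// invented): the store mirrors path metadata so the S3A client can
// cross-check eventually-consistent S3 responses against it.
public class MetadataStoreSketch {
    interface MetadataStore {
        void put(String path);            // record a path known to exist
        boolean exists(String path);      // consistent view of the path
        void delete(String path);         // record a deletion
        Set<String> listChildren(String dir);
    }

    // trivial in-memory implementation, standing in for a real back-end
    static class InMemoryStore implements MetadataStore {
        private final Set<String> paths = new TreeSet<>();
        public void put(String path) { paths.add(path); }
        public boolean exists(String path) { return paths.contains(path); }
        public void delete(String path) { paths.remove(path); }
        public Set<String> listChildren(String dir) {
            Set<String> children = new TreeSet<>();
            for (String p : paths) {
                // direct children only: no further '/' after the dir prefix
                if (p.startsWith(dir + "/")
                        && p.indexOf('/', dir.length() + 1) < 0) {
                    children.add(p);
                }
            }
            return children;
        }
    }

    public static void main(String[] args) {
        MetadataStore store = new InMemoryStore();
        store.put("/data/a");
        store.put("/data/b");
        store.put("/data/sub/c");
        System.out.println(store.listChildren("/data"));  // [/data/a, /data/b]
    }
}
```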
[jira] [Commented] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.
[ https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403062#comment-15403062 ] Chris Nauroth commented on HADOOP-13447: The scope of this patch is intended to be purely refactoring pre-work with no logic changes to S3A. Depending on the final outcome, we may want to consider bringing the refactoring right into the main branches (trunk and branch-2). That will help with code maintenance on the main branches and minimize merge conflicts while merging trunk to the feature branch. > S3Guard: Refactor S3AFileSystem to support introduction of separate metadata > repository and tests. > -- > > Key: HADOOP-13447 > URL: https://issues.apache.org/jira/browse/HADOOP-13447 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > The scope of this issue is to refactor the existing {{S3AFileSystem}} into > multiple coordinating classes. The goal of this refactoring is to separate > the {{FileSystem}} API binding from the AWS SDK integration, make code > maintenance easier while we're making changes for S3Guard, and make it easier > to mock some implementation details so that tests can simulate eventual > consistency behavior in a deterministic way. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.
Chris Nauroth created HADOOP-13447: -- Summary: S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests. Key: HADOOP-13447 URL: https://issues.apache.org/jira/browse/HADOOP-13447 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Assignee: Chris Nauroth The scope of this issue is to refactor the existing {{S3AFileSystem}} into multiple coordinating classes. The goal of this refactoring is to separate the {{FileSystem}} API binding from the AWS SDK integration, make code maintenance easier while we're making changes for S3Guard, and make it easier to mock some implementation details so that tests can simulate eventual consistency behavior in a deterministic way. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403056#comment-15403056 ] Chris Nauroth commented on HADOOP-13446: One potential solution could be to move AWS integration tests into the integration-test phase of the Maven lifecycle. That's likely the first path I'll explore. > S3Guard: Support running isolated unit tests separate from AWS integration > tests. > - > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
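One common way to wire such a split in Maven (shown only as a sketch; this is not the configuration hadoop-aws ultimately adopted, and the naming convention is an assumption) is to leave the fast, isolated unit tests on Surefire and bind the AWS-dependent suites to the Failsafe plugin, which runs in the integration-test phase:

```xml
<!-- Illustrative only: tests matching Failsafe's *IT.java convention run in
     the integration-test phase (and can be skipped when no AWS credentials
     are configured), while Surefire keeps running the isolated unit tests
     on every build, including ASF pre-commit. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```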
[jira] [Created] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.
Chris Nauroth created HADOOP-13446: -- Summary: S3Guard: Support running isolated unit tests separate from AWS integration tests. Key: HADOOP-13446 URL: https://issues.apache.org/jira/browse/HADOOP-13446 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Chris Nauroth Assignee: Chris Nauroth Currently, the hadoop-aws module only runs Surefire if AWS credentials have been configured. This implies that all tests must run integrated with the AWS back-end. It also means that no tests run as part of ASF pre-commit. This issue proposes for the hadoop-aws module to support running isolated unit tests without integrating with AWS. This will benefit S3Guard, because we expect the need for isolated mock-based testing to simulate eventual consistency behavior. It also benefits hadoop-aws in general by allowing pre-commit to do something more valuable. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13345: --- Attachment: S3GuardImprovedConsistencyforS3AV2.pdf I am attaching a V2 design document that brings together content from both of the documents that were written independently and attached previously. Once again, feedback is welcome. If it's helpful, I also can set up a shared Google Doc for us to collaborate more directly on later revisions. > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13081: -- Attachment: HADOOP-13081.03.patch > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, > HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13081: -- Attachment: (was: HADOOP-13081.03.patch) > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, > HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13081: -- Attachment: HADOOP-13081.03.patch fixed checkstyle... sigh > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, > HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403024#comment-15403024 ] Hadoop QA commented on HADOOP-13444: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 32s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 29s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 48s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 41s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 928 unchanged - 15 fixed = 928 total (was 943) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 56s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-nfs in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 15s{color} | {color:black} {color} | \\ \\ ||
[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403023#comment-15403023 ] Hadoop QA commented on HADOOP-13081: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 35s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 150 unchanged - 0 fixed = 151 total (was 150) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 29s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821435/HADOOP-13081.03.patch | | JIRA Issue | HADOOP-13081 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cc794176aac8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f473cf | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10146/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > add the ability to create multiple UGIs/subjects from one kerberos login > > >
[jira] [Commented] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters
[ https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402990#comment-15402990 ] Jason Lowe commented on HADOOP-13362: - bq. any reason to not putting this patch to 2.7.3? 2.7.3 was in lockdown for only critical blockers that were preventing it from being released, so I just committed it to branch-2.7. I'm OK with it going into 2.7.3 if there's still a chance to do so, assuming the 2.7.3 release manager is also OK with it. > DefaultMetricsSystem leaks the source name when a source unregisters > > > Key: HADOOP-13362 > URL: https://issues.apache.org/jira/browse/HADOOP-13362 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.2 >Reporter: Jason Lowe >Assignee: Junping Du >Priority: Critical > Fix For: 2.7.4 > > Attachments: HADOOP-13362-branch-2.7.patch > > > Ran across a nodemanager that was spending most of its time in GC. Upon > examination of the heap most of the memory was going to the map of names in > org.apache.hadoop.metrics2.lib.UniqueNames. In this case the map had almost > 2 million entries. Looking at a few of the map showed entries like > "ContainerResource_container_e01_1459548490386_8560138_01_002020", > "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc. > Looks like the ContainerMetrics for each container will cause a unique name > to be registered with UniqueNames and the name will never be unregistered. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
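The growth pattern in this report can be illustrated with a toy model. This is not the real UniqueNames class; the names below are invented. Each container registers a metrics source under a name derived from its unique container ID, the map deduplicates names by appending a counter, and nothing ever removes entries, so the map retains one entry per container forever.

```java
import java.util.*;

// Illustrative model of the leak (not the real
// org.apache.hadoop.metrics2.lib.UniqueNames implementation): every
// registered name is remembered so repeats can be disambiguated, but
// container metric names are already unique, so each finished container
// leaves a permanent entry behind.
public class UniqueNamesLeak {
    private final Map<String, Integer> names = new HashMap<>();

    String uniqueName(String name) {
        Integer count = names.get(name);
        if (count == null) {
            names.put(name, 1);          // first use: remember it forever
            return name;
        }
        names.put(name, count + 1);
        return name + "-" + (count + 1); // disambiguate a repeated name
    }

    int size() { return names.size(); }

    public static void main(String[] args) {
        UniqueNamesLeak u = new UniqueNamesLeak();
        for (int i = 0; i < 10_000; i++) {
            // one entry per finished container, never cleaned up
            u.uniqueName("ContainerResource_container_" + i);
        }
        System.out.println(u.size());  // 10000
    }
}
```

On a long-lived NodeManager the same pattern reaches the millions of entries described above, which is why unregistering a source needs to release its name as well.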
[jira] [Commented] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories
[ https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402954#comment-15402954 ] Chris Nauroth commented on HADOOP-13208: {quote} What is this for? I see it was introduced in HADOOP-13131, but don't see any usage of the configuration flag it sets (fs.s3a.impl.disable.cache). {quote} [~fabbri], sorry, I missed this comment last time around. This logic disables the {{FileSystem}} cache. The logic for this is inside the {{FileSystem}} base class. (See the static {{FileSystem#get(URI, Configuration)}} method.) Whether or not the cache is in effect is determined by a dynamic configuration property: {{fs.<scheme>.impl.disable.cache}}. In some cases, we might have a test suite with multiple test methods that all want to test slightly different configuration. However, all tests in a suite execute in the same process, so there would be a risk of reusing a cached instance across multiple tests. Unfortunately, the caching is not sensitive to changes in the {{Configuration}} instance. (The cache key is the scheme, authority, and {{UserGroupInformation}}.) To work around that, we disable the cache for these tests. 
> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the > pseudo-tree of directories > > > Key: HADOOP-13208 > URL: https://issues.apache.org/jira/browse/HADOOP-13208 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13208-branch-2-001.patch, > HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, > HADOOP-13208-branch-2-009.patch, HADOOP-13208-branch-2-010.patch, > HADOOP-13208-branch-2-011.patch, HADOOP-13208-branch-2-012.patch, > HADOOP-13208-branch-2-017.patch, HADOOP-13208-branch-2-018.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > A major cost in split calculation against object stores turns out be listing > the directory tree itself. That's because against S3, it takes S3A two HEADs > and two lists to list the content of any directory path (2 HEADs + 1 list for > getFileStatus(); the next list to query the contents). > Listing a directory could be improved slightly by combining the final two > listings. However, a listing of a directory tree will still be > O(directories). In contrast, a recursive {{listFiles()}} operation should be > implementable by a bulk listing of all descendant paths; one List operation > per thousand descendants. > As the result of this call is an iterator, the ongoing listing can be > implemented within the iterator itself -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
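The caching behavior described in the comment above can be modeled in a few lines. This is a simplified simulation, not the real Hadoop implementation; the class and names below are invented for illustration. The key point is that the cache key covers scheme, authority, and user, but not the Configuration, so a second lookup with different settings silently returns the first instance unless caching is disabled.

```java
import java.util.*;

// Simplified model of the FileSystem cache behavior (not real Hadoop code):
// the cache key is (scheme, authority, user); Configuration differences are
// invisible to the cache unless fs.<scheme>.impl.disable.cache bypasses it.
public class FsCacheModel {
    static final Map<List<String>, Object> CACHE = new HashMap<>();

    static Object get(String scheme, String authority, String user,
                      Map<String, String> conf) {
        boolean disableCache = Boolean.parseBoolean(
            conf.getOrDefault("fs." + scheme + ".impl.disable.cache", "false"));
        List<String> key = Arrays.asList(scheme, authority, user);
        if (!disableCache && CACHE.containsKey(key)) {
            return CACHE.get(key);   // Configuration differences are ignored
        }
        Object fs = new Object();    // stand-in for a new FileSystem instance
        if (!disableCache) {
            CACHE.put(key, fs);
        }
        return fs;
    }

    public static void main(String[] args) {
        Object first = get("s3a", "bucket", "alice", new HashMap<>());

        Map<String, String> conf = new HashMap<>();
        conf.put("some.test.setting", "changed");
        // same key -> cached instance, despite the different Configuration
        System.out.println(get("s3a", "bucket", "alice", conf) == first);

        conf.put("fs.s3a.impl.disable.cache", "true");
        // cache bypassed -> a fresh instance on every call
        System.out.println(get("s3a", "bucket", "alice", conf) == first);
    }
}
```

This is why tests that vary configuration must set the disable-cache property: without it, every test in the suite would silently share the instance created by the first test.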
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13081: --- Target Version/s: 2.8.0 Component/s: security Issue Type: Improvement (was: Bug) > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13081: --- Hadoop Flags: Reviewed +1 for patch 03, pending pre-commit run. > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. 
[jira] [Updated] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login
[ https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HADOOP-13081: -- Attachment: HADOOP-13081.03.patch Updated > add the ability to create multiple UGIs/subjects from one kerberos login > > > Key: HADOOP-13081 > URL: https://issues.apache.org/jira/browse/HADOOP-13081 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, > HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.patch > > > We have a scenario where we log in with kerberos as a certain user for some > tasks, but also want to add tokens to the resulting UGI that would be > specific to each task. We don't want to authenticate with kerberos for every > task. > I am not sure how this can be accomplished with the existing UGI interface. > Perhaps some clone method would be helpful, similar to createProxyUser minus > the proxy stuff; or it could just relogin anew from ticket cache. > getUGIFromTicketCache seems like the best option in existing code, but there > doesn't appear to be a consistent way of handling ticket cache location - the > above method, that I only see called in test, is using a config setting that > is not used anywhere else, and the env variable for the location that is used > in the main ticket cache related methods is not set uniformly on all paths - > therefore, trying to find the correct ticket cache and passing it via the > config setting to getUGIFromTicketCache seems even hackier than doing the > clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user > parameter on the main path - it logs a warning for multiple principals and > then logs in with first available. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402887#comment-15402887 ] Kai Zheng commented on HADOOP-12756: Mingfei, The patch needs to be updated against HADOOP-12756 branch. The major relevant change is 3.0.0-alpha1-SNAPSHOT -> 3.0.0-alpha2-SNAPSHOT. Please check and make sure you're working/patching based on the same branch. Thanks. > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0, HADOOP-12756 >Reporter: shimingfei >Assignee: shimingfei > Fix For: HADOOP-12756 > > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, > HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, > HADOOP-12756.007.patch, HADOOP-12756.008.patch, HCFS User manual.md, OSS > integration.pdf, OSS integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data laid on OSS storage from user’s Hadoop/Spark application, > because of no original support for OSS in Hadoop. > This work aims to integrate Aliyun OSS with Hadoop. By simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change. Narrowing the gap between user’s APP and data storage, like what have > been done for S3 in Hadoop -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: HADOOP-13444.branch-2.patch rebase for branch-2 > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.patch, > HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: (was: HADOOP-13444.branch-2.8.patch) > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: (was: HADOOP-13444.branch-2.patch) > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements
[ https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402836#comment-15402836 ] Chris Nauroth commented on HADOOP-13403: Thank you for sharing patch 003. If the reason for the unusual executor logic is optimization, then I suggest adding more comments in the {{executeParallel}} JavaDocs to explain that. I'm not sure that the memory optimization argument is true for the {{delete}} code path, where it still does a conversion from {{ArrayList}} to array. bq. Is there any way to achieve this through futures? If the code had followed idiomatic usage, then the typical solution is to call {{ThreadPoolExecutor#submit}} for each task, track every returned {{Future}} in a list, and then iterate through the list and call {{Future#get}} on each one. If any individual task threw an exception, then the call to {{Future#get}} would propagate that exception. Then, that would give you an opportunity to call {{ThreadPoolExecutor#shutdownNow}} to cancel or interrupt all remaining tasks. With the current logic though, I don't really see a way to adapt this pattern. Repeating an earlier comment, I don't see any exceptions thrown from {{getThreadPool}}, so coding exception handling around it and tests for it looks unnecessary. If you check validity of {{deleteThreadCount}} and {{renameThreadCount}} in {{initialize}} (e.g. check for values <= 0) and fail fast by throwing an exception during initialization, then even unchecked exceptions will be impossible during calls to {{getThreadPool}}. I still see numerous test failures in {{TestFileSystemOperationsWithThreads}}. For the next patch revision, would you please ensure all tests pass? 
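The idiomatic Future-based pattern Chris describes can be sketched as follows. This is a generic illustration (the task bodies and file names are invented, not from the patch): submit each task, keep every returned Future, then iterate and call get() so any task exception propagates as an ExecutionException, at which point shutdownNow() cancels or interrupts the remaining tasks.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {
    /** Returns the number of tasks confirmed successful before any failure. */
    static int renameAll(List<String> files) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<?>> futures = new ArrayList<>();
        for (String file : files) {
            futures.add(pool.submit(() -> {
                // Stand-in for the real rename/delete of one file.
                if (file.contains("bad")) {
                    throw new IllegalStateException("rename failed: " + file);
                }
            }));
        }
        int succeeded = 0;
        try {
            for (Future<?> f : futures) {
                f.get();            // rethrows any task exception wrapped in ExecutionException
                succeeded++;
            }
        } catch (ExecutionException e) {
            pool.shutdownNow();     // cancel/interrupt all remaining tasks
        } finally {
            pool.shutdown();
        }
        return succeeded;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(renameAll(List.of("a", "b", "c")));        // 3
        System.out.println(renameAll(List.of("a", "bad", "c")) < 3);  // true
    }
}
```

The point of tracking the futures in submission order is that get() gives the caller a defined place where a worker failure surfaces, which the shared-index loop in the current patch does not naturally provide.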
> AzureNativeFileSystem rename/delete performance improvements > > > Key: HADOOP-13403 > URL: https://issues.apache.org/jira/browse/HADOOP-13403 > Project: Hadoop Common > Issue Type: Bug > Components: azure >Affects Versions: 2.7.2 >Reporter: Subramanyam Pattipaka >Assignee: Subramanyam Pattipaka > Fix For: 2.9.0 > > Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, > HADOOP-13403-003.patch > > > WASB Performance Improvements > Problem > --- > Azure Native File system operations like rename/delete which have a large number > of directories and/or files in the source directory are experiencing > performance issues. Here are possible reasons > a)We first list all files under source directory hierarchically. This is > a serial operation. > b)After collecting the entire list of files under a folder, we delete or > rename files one by one serially. > c)There is no logging information available for these costly operations > even in DEBUG mode leading to difficulty in understanding wasb performance > issues. > Proposal > - > Step 1: Rename and delete operations will generate a list of all files under the > source folder. We need to use azure flat listing option to get list with > single request to azure store. We have introduced config > fs.azure.flatlist.enable to enable this option. The default value is 'false' > which means flat listing is disabled. > Step 2: Create thread pool and threads dynamically based on user > configuration. These thread pools will be deleted after operation is over. > We are introducing two new configs > a) fs.azure.rename.threads : Config to set number of rename > threads. Default value is 0 which means no threading. > b) fs.azure.delete.threads: Config to set number of delete > threads. Default value is 0 which means no threading. > We have provided debug log information on number of threads not used > for the operation which can be useful. 
> Failure Scenarios: > If we fail to create the thread pool for ANY reason (for example trying to > create it with a large thread count value such as 100), we fall back to > serial operation. > Step 3: Blob operations can be done in parallel using multiple threads > executing the following snippet > while ((currentIndex = fileIndex.getAndIncrement()) < files.length) { > FileMetadata file = files[currentIndex]; > Rename/delete(file); > } > The above strategy depends on the fact that all files are stored in a > final array and each thread atomically claims the next index to do > the job. The advantage of this strategy is that even if the user configures a large > number of unusable threads, we always ensure that work doesn’t get serialized > due to lagging threads. > We are logging the following information which can be useful for tuning the > number of threads > a) Number of unusable threads > b) Time taken by each thread > c) Number of files
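The shared-index snippet in the proposal can be made into a complete, runnable example. Each worker claims the next unprocessed index from a shared AtomicInteger, so a slow or never-scheduled thread cannot serialize the remaining work -- the other threads simply keep claiming indices past it. The file array here is a stand-in for the FileMetadata array in the patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkClaimSketch {
    public static void main(String[] args) throws InterruptedException {
        final String[] files = new String[1000];
        for (int i = 0; i < files.length; i++) files[i] = "file-" + i;

        final AtomicInteger fileIndex = new AtomicInteger(0);
        final AtomicInteger processed = new AtomicInteger(0);

        Runnable worker = () -> {
            int currentIndex;
            // getAndIncrement hands each index to exactly one thread.
            while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
                // Stand-in for rename/delete of files[currentIndex].
                processed.incrementAndGet();
            }
        };

        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 8; i++) threads.add(new Thread(worker));
        threads.forEach(Thread::start);
        for (Thread t : threads) t.join();

        // Every index is claimed exactly once, however the threads interleave.
        System.out.println(processed.get());  // 1000
    }
}
```

Note that an over-provisioned pool is harmless under this scheme: surplus threads find fileIndex already past files.length on their first claim and exit immediately, which is exactly the "unusable threads" the proposal counts in its debug logging.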
[jira] [Commented] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters
[ https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402805#comment-15402805 ] Junping Du commented on HADOOP-13362: - Hi [~jlowe], forget to ask, any reason to not putting this patch to 2.7.3? > DefaultMetricsSystem leaks the source name when a source unregisters > > > Key: HADOOP-13362 > URL: https://issues.apache.org/jira/browse/HADOOP-13362 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.2 >Reporter: Jason Lowe >Assignee: Junping Du >Priority: Critical > Fix For: 2.7.4 > > Attachments: HADOOP-13362-branch-2.7.patch > > > Ran across a nodemanager that was spending most of its time in GC. Upon > examination of the heap most of the memory was going to the map of names in > org.apache.hadoop.metrics2.lib.UniqueNames. In this case the map had almost > 2 million entries. Looking at a few of the map showed entries like > "ContainerResource_container_e01_1459548490386_8560138_01_002020", > "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc. > Looks like the ContainerMetrics for each container will cause a unique name > to be registered with UniqueNames and the name will never be unregistered. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402729#comment-15402729 ] Hadoop QA commented on HADOOP-13444: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 29s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} 
| | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-common-project: The patch generated 2 new + 975 unchanged - 15 fixed = 977 total (was 990) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s{color} | {color:green} hadoop-nfs in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 5s{color} |
[jira] [Updated] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
[ https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-12815: --- Resolution: Won't Fix Status: Resolved (was: Patch Available) This patch was superseded by HADOOP-12801, which annotated the tests with {{@Ignore}}, and HADOOP-13239, which deprecated {{s3://}}. > TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and > TestS3ContractRootDir#testRmRootRecursive fail on branch-2. > > > Key: HADOOP-12815 > URL: https://issues.apache.org/jira/browse/HADOOP-12815 > Project: Hadoop Common > Issue Type: Bug >Reporter: Chris Nauroth >Assignee: Matthew Paduano > Attachments: HADOOP-12815.branch-2.01.patch > > > TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and > TestS3ContractRootDir#testRmRootRecursive fail on branch-2. The tests pass > on trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13434) Add quoting to Shell class
[ https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402529#comment-15402529 ] Arpit Agarwal edited comment on HADOOP-13434 at 8/1/16 7:15 PM: The test failure looks unrelated (won't repro for me). I'd like to commit this tomorrow. Allen, I can post a separate patch for the fix you suggested after I commit this, assuming you don't object. was (Author: arpitagarwal): The test failure looks unrelated (won't repro for me). I'd like to commit this by EOD today. Allen, I can post a separate patch for the fix you suggested after I commit this, assuming you don't object. > Add quoting to Shell class > -- > > Key: HADOOP-13434 > URL: https://issues.apache.org/jira/browse/HADOOP-13434 > Project: Hadoop Common > Issue Type: Bug >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HADOOP-13434.patch, HADOOP-13434.patch, > HADOOP-13434.patch > > > The Shell class makes assumptions that the parameters won't have spaces or > other special characters, even when it invokes bash. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding
[ https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402638#comment-15402638 ] Thomas Poepping commented on HADOOP-13344: -- bump? > Add option to exclude Hadoop's SLF4J binding > > > Key: HADOOP-13344 > URL: https://issues.apache.org/jira/browse/HADOOP-13344 > Project: Hadoop Common > Issue Type: New Feature > Components: bin, scripts >Affects Versions: 2.8.0, 2.7.2 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HADOOP-13344.patch > > > If another application that uses the Hadoop classpath brings in its own SLF4J > binding for logging, and that jar is not the exact same as the one brought in > by Hadoop, then there will be a conflict between logging jars between the two > classpaths. This patch introduces an optional setting to remove Hadoop's > SLF4J binding from the classpath, to get rid of this problem. > This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure > has been changed in 3.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13323) Downgrade stack trace on FS load from Warn to debug
[ https://issues.apache.org/jira/browse/HADOOP-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402554#comment-15402554 ] Chen Liang edited comment on HADOOP-13323 at 8/1/16 6:59 PM: - Hi [~ste...@apache.org], I'm trying to dig into the failed test as you mentioned in HADOOP-12588. But the test results report seems to be lost, could you elaborate a bit more on the failed test? Thanks. was (Author: vagarychen): Hi [~ste...@apache.org], I'm trying to dig into the failed test as you mentioned in HADOOP-12588. But the test results report seems to be lost, could you elaborate a bit more on the failed test? thanks! - Chen > Downgrade stack trace on FS load from Warn to debug > --- > > Key: HADOOP-13323 > URL: https://issues.apache.org/jira/browse/HADOOP-13323 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13323-branch-2-001.patch > > > HADOOP-12636 catches exceptions on FS creation, but prints a stack trace @ > warn every time..this is noisy and irrelevant if the installation doesn't > need connectivity to a specific filesystem or object store. > I propose: only printing the toString values of the exception chain @ warn; > the full stack comes out at debug. > We could some more tuning: > * have a specific log for this exception, which allows installations to turn > even the warnings off. > * add a link to a wiki page listing the dependencies of the shipped > filesystems -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics
[ https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402550#comment-15402550 ] Chen Liang edited comment on HADOOP-13439 at 8/1/16 6:59 PM: - Hi [~iwasakims], Thanks for assigning this to me. One question though, I was not able to reproduce this error in my environment, the test report of failed tests in HADOOP-13323 seems no longer accessible. I will leave a comment there. But do you mind elaborate a little bit more on the error and your thoughts there? Thanks. was (Author: vagarychen): Hi [~iwasakims], Thanks for assigning this to me. One question though, I was not able to reproduce this error in my environment, the test report of failed tests in HADOOP-13323 seems no longer accessible. I will leave a comment there. But do you mind elaborate a little bit more on the error and your thoughts there? Thanks, Chen > Fix race between TestMetricsSystemImpl and TestGangliaMetrics > - > > Key: HADOOP-13439 > URL: https://issues.apache.org/jira/browse/HADOOP-13439 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Masatake Iwasaki >Assignee: Chen Liang >Priority: Minor > > TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used. > {noformat} > 2016-06-27 15:21:31,480 INFO impl.MetricsSystemImpl > (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 > second(s). > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: HADOOP-13444.branch-2.8.patch rebased for branch-2.8 > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.8.patch, > HADOOP-13444.branch-2.patch, HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: HADOOP-13444.branch-2.patch rebased for branch-2 > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.patch, > HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: (was: HADOOP-13444.branch-2.5.patch) > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402579#comment-15402579 ] Hadoop QA commented on HADOOP-13444: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-13444 does not apply to branch-2.5. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821370/HADOOP-13444.branch-2.5.patch | | JIRA Issue | HADOOP-13444 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10143/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.5.patch, > HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent Poon updated HADOOP-13444: -- Attachment: HADOOP-13444.branch-2.5.patch rebased for branch-2 and branch-2.8 > Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets > - > > Key: HADOOP-13444 > URL: https://issues.apache.org/jira/browse/HADOOP-13444 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: Vincent Poon >Assignee: Vincent Poon >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, > HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.branch-2.5.patch, > HADOOP-13444.patch > > > org.apache.commons.io.Charsets is deprecated in favor of > java.nio.charset.StandardCharsets
[jira] [Commented] (HADOOP-13323) Downgrade stack trace on FS load from Warn to debug
[ https://issues.apache.org/jira/browse/HADOOP-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402554#comment-15402554 ] Chen Liang commented on HADOOP-13323: - Hi [~ste...@apache.org], I'm trying to dig into the failed test you mentioned in HADOOP-12588, but the test results report seems to be lost. Could you elaborate a bit more on the failed test? Thanks! - Chen > Downgrade stack trace on FS load from Warn to debug > --- > > Key: HADOOP-13323 > URL: https://issues.apache.org/jira/browse/HADOOP-13323 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13323-branch-2-001.patch > > > HADOOP-12636 catches exceptions on FS creation, but prints a stack trace @ > warn every time. This is noisy and irrelevant if the installation doesn't > need connectivity to a specific filesystem or object store. > I propose: only printing the toString values of the exception chain @ warn; > the full stack comes out at debug. > We could do some more tuning: > * have a specific log for this exception, which allows installations to turn > even the warnings off. > * add a link to a wiki page listing the dependencies of the shipped > filesystems
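The proposal in the issue above — a one-line toString chain at WARN, full stack only at DEBUG — can be sketched like this; the helper below is illustrative, not code from the patch:

```java
// Illustrative helper for the logging proposal above: condense the exception
// cause chain to one line for WARN, leaving the full stack trace for DEBUG.
public class ExceptionSummary {

    public static String summarize(Throwable t) {
        StringBuilder sb = new StringBuilder();
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (sb.length() > 0) {
                sb.append(" -> ");
            }
            sb.append(c);  // Throwable.toString(): "class name: message"
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Throwable t = new RuntimeException("FS load failed",
                new ClassNotFoundException("com.example.MissingFileSystem"));
        // A logger would emit this string at WARN and pass t itself at DEBUG.
        System.out.println(summarize(t));
    }
}
```

The point of the chain walk is that the root cause (often a missing dependency) survives into the one-line message without the noise of a full trace.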
[jira] [Commented] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics
[ https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402550#comment-15402550 ] Chen Liang commented on HADOOP-13439: - Hi [~iwasakims], Thanks for assigning this to me. One question, though: I was not able to reproduce this error in my environment, and the test report of the failed tests in HADOOP-13323 seems no longer accessible; I will leave a comment there. Would you mind elaborating a little more on the error and your thoughts there? Thanks, Chen > Fix race between TestMetricsSystemImpl and TestGangliaMetrics > - > > Key: HADOOP-13439 > URL: https://issues.apache.org/jira/browse/HADOOP-13439 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Masatake Iwasaki >Assignee: Chen Liang >Priority: Minor > > TestGangliaMetrics#testGangliaMetrics2 sets *.period to 120, but 8 was used. > {noformat} > 2016-06-27 15:21:31,480 INFO impl.MetricsSystemImpl > (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 > second(s). > {noformat}
[jira] [Commented] (HADOOP-13434) Add quoting to Shell class
[ https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402529#comment-15402529 ] Arpit Agarwal commented on HADOOP-13434: The test failure looks unrelated (won't repro for me). I'd like to commit this by EOD today. Allen, I can post a separate patch for the fix you suggested after I commit this, assuming you don't object. > Add quoting to Shell class > -- > > Key: HADOOP-13434 > URL: https://issues.apache.org/jira/browse/HADOOP-13434 > Project: Hadoop Common > Issue Type: Bug >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HADOOP-13434.patch, HADOOP-13434.patch, > HADOOP-13434.patch > > > The Shell class assumes that the parameters won't have spaces or > other special characters, even when it invokes bash.
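One common way to make arguments safe for bash, in the spirit of the fix this issue describes, is the standard single-quote technique — a sketch only; the actual Shell patch may quote differently:

```java
// Sketch of standard bash single-quoting: wrap the argument in single quotes
// and replace each embedded single quote with the sequence '\'' (close quote,
// escaped quote, reopen quote). Illustrative, not the patch's actual code.
public class BashQuote {

    public static String quote(String arg) {
        return "'" + arg.replace("'", "'\\''") + "'";
    }

    public static void main(String[] args) {
        System.out.println(quote("file with spaces.txt")); // 'file with spaces.txt'
        System.out.println(quote("it's"));                 // 'it'\''s'
    }
}
```

Inside single quotes bash performs no expansion at all, which is why this scheme handles spaces, globs, and `$` without case analysis; the only character needing special handling is the single quote itself.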
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402418#comment-15402418 ] Hadoop QA commented on HADOOP-12756: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 39s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 7s{color} | {color:red} hadoop-tools in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 7s{color} | {color:red} hadoop-aliyun in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} hadoop-tools-dist in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 9s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 9s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 9s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 8s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s{color} | {color:red} hadoop-aliyun in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 12s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s{color} | {color:red} root in the patch failed. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 12s{color} | {color:blue} ASF License check generated no output? {color} | | {color:black}{color} | {color:black} {color} | {color:black} 54m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821347/HADOOP-12756.008.patch | | JIRA Issue | HADOOP-12756 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux c557ac6a89df
[jira] [Commented] (HADOOP-13426) More efficiently build IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402325#comment-15402325 ] Hadoop QA commented on HADOOP-13426: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 193 unchanged - 3 fixed = 194 total (was 196) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 40s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12820981/HADOOP-13426.1.patch | | JIRA Issue | HADOOP-13426 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ddb2a2e496d3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9f473cf | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10141/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10141/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10141/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > More efficiently build IPC responses > > > Key: HADOOP-13426 > URL: https://issues.apache.org/jira/browse/HADOOP-13426 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments:
[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12756: --- Attachment: HADOOP-12756.008.patch I have updated the HADOOP-12756 branch to sync with the latest trunk. Let's try the patch once more, reuploading the same patch as a new version. > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0, HADOOP-12756 >Reporter: shimingfei >Assignee: shimingfei > Fix For: HADOOP-12756 > > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, > HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, > HADOOP-12756.007.patch, HADOOP-12756.008.patch, HCFS User manual.md, OSS > integration.pdf, OSS integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data stored on OSS from a user’s Hadoop/Spark application, > because Hadoop has no native support for OSS. > This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change, narrowing the gap between user applications and data storage, as has > been done for S3 in Hadoop
[jira] [Commented] (HADOOP-13434) Add quoting to Shell class
[ https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402221#comment-15402221 ] Larry McCay commented on HADOOP-13434: -- [~aw] - that seems reasonable and correct to me. [~owen.omalley] - Have you checked the unit test failure from the Jenkins run? It strikes me as unrelated. > Add quoting to Shell class > -- > > Key: HADOOP-13434 > URL: https://issues.apache.org/jira/browse/HADOOP-13434 > Project: Hadoop Common > Issue Type: Bug >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HADOOP-13434.patch, HADOOP-13434.patch, > HADOOP-13434.patch > > > The Shell class assumes that the parameters won't have spaces or > other special characters, even when it invokes bash.
[jira] [Commented] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402109#comment-15402109 ] Kai Sasaki commented on HADOOP-13061: - [~drankye] Fixed checkstyle. The test failure seems unrelated to the patch. Could you review this when you have time? Thank you. > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch > >
[jira] [Commented] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402092#comment-15402092 ] Hadoop QA commented on HADOOP-13061: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 14s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 27 unchanged - 13 fixed = 27 total (was 40) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 44s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821323/HADOOP-13061.02.patch | | JIRA Issue | HADOOP-13061 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7b822d10dbb4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 95694b7 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10140/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10140/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10140/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai
[jira] [Updated] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13061: Attachment: HADOOP-13061.02.patch > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch > >
[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems
[ https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402003#comment-15402003 ] Hadoop QA commented on HADOOP-9565: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 5m 28s{color} | {color:red} Docker failed to build yetus/hadoop:b59b8b7. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821320/HADOOP-9565-branch-2-007.patch | | JIRA Issue | HADOOP-9565 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10139/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add a Blobstore interface to add to blobstore FileSystems > - > > Key: HADOOP-9565 > URL: https://issues.apache.org/jira/browse/HADOOP-9565 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/s3, fs/swift >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Pieter Reuse > Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, > HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, > HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch > > > We can make explicit the fact that some {{FileSystem}} implementations are really > blobstores, with different atomicity and consistency guarantees, by adding a > {{Blobstore}} interface to them. > This could also be a place to add a {{Copy(Path,Path)}} method, assuming that > all blobstores implement a server-side copy operation as a substitute for > rename.
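A rough sketch of the interface the issue proposes; the names, the boolean capability probe, and the String paths are all placeholders (Hadoop would use org.apache.hadoop.fs.Path, and the real patch may differ):

```java
// Hypothetical sketch of the proposed marker interface; String stands in
// for org.apache.hadoop.fs.Path to keep the example self-contained.
interface Blobstore {
    // Blobstores often lack atomic rename; callers can probe for that here.
    boolean hasAtomicRename();

    // Server-side copy, the blobstore substitute for rename.
    boolean copy(String src, String dest);
}

public class BlobstoreSketch {
    public static void main(String[] args) {
        Blobstore store = new Blobstore() {
            public boolean hasAtomicRename() { return false; }
            public boolean copy(String src, String dest) {
                // A real implementation would issue the store's COPY request.
                return true;
            }
        };
        System.out.println(store.copy("s3a://bucket/a", "s3a://bucket/b"));
    }
}
```

The value of a marker interface like this is that callers can test `fs instanceof Blobstore` and adjust their assumptions about rename atomicity instead of hard-coding scheme names.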
[jira] [Updated] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems
[ https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pieter Reuse updated HADOOP-9565: - Attachment: HADOOP-9565-branch-2-007.patch renamed file to HADOOP-9565-branch-2-007.patch so Hadoop QA can apply it. > Add a Blobstore interface to add to blobstore FileSystems > - > > Key: HADOOP-9565 > URL: https://issues.apache.org/jira/browse/HADOOP-9565 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/s3, fs/swift >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Pieter Reuse > Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, > HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, > HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch > > > We can make explicit the fact that some {{FileSystem}} implementations are really > blobstores, with different atomicity and consistency guarantees, by adding a > {{Blobstore}} interface to them. > This could also be a place to add a {{Copy(Path,Path)}} method, assuming that > all blobstores implement a server-side copy operation as a substitute for > rename.
[jira] [Updated] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems
[ https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pieter Reuse updated HADOOP-9565: - Attachment: (was: HADOOP-9565-007.patch) > Add a Blobstore interface to add to blobstore FileSystems > - > > Key: HADOOP-9565 > URL: https://issues.apache.org/jira/browse/HADOOP-9565 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/s3, fs/swift >Affects Versions: 2.6.0 >Reporter: Steve Loughran >Assignee: Pieter Reuse > Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, > HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, > HADOOP-9565-006.patch > > > We can make explicit the fact that some {{FileSystem}} implementations are really > blobstores, with different atomicity and consistency guarantees, by adding a > {{Blobstore}} interface to them. > This could also be a place to add a {{Copy(Path,Path)}} method, assuming that > all blobstores implement a server-side copy operation as a substitute for > rename.
[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401967#comment-15401967 ] Oscar Morante commented on HADOOP-13075: Thanks, I'll try that. > Add support for SSE-KMS and SSE-C in s3a filesystem > --- > > Key: HADOOP-13075 > URL: https://issues.apache.org/jira/browse/HADOOP-13075 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Andrew Olson >Assignee: Federico Czerwinski > > S3 provides 3 types of server-side encryption [1], > * SSE-S3 (Amazon S3-Managed Keys) [2] > * SSE-KMS (AWS KMS-Managed Keys) [3] > * SSE-C (Customer-Provided Keys) [4] > Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 > (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. > With native support in aws-java-sdk already available it should be fairly > straightforward [6],[7] to support the other two types of SSE with some > additional fs.s3a configuration properties. > [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html > [2] > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html > [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html > [4] > http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html > [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html > [6] > http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java > [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401964#comment-15401964 ] Andrew Olson commented on HADOOP-13075: --- [~spacepluk] For specifying the default behavior it would be best to just leave {{fs.s3a.server-side-encryption-algorithm}} unset -- if for scripted consistency purposes you have concluded that you must set it, I believe an empty string value would work. > Add support for SSE-KMS and SSE-C in s3a filesystem > --- > > Key: HADOOP-13075 > URL: https://issues.apache.org/jira/browse/HADOOP-13075 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Andrew Olson >Assignee: Federico Czerwinski > > S3 provides 3 types of server-side encryption [1], > * SSE-S3 (Amazon S3-Managed Keys) [2] > * SSE-KMS (AWS KMS-Managed Keys) [3] > * SSE-C (Customer-Provided Keys) [4] > Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 > (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. > With native support in aws-java-sdk already available it should be fairly > straightforward [6],[7] to support the other two types of SSE with some > additional fs.s3a configuration properties. 
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html > [2] > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html > [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html > [4] > http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html > [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html > [6] > http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java > [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
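For a scripted setup like the one asked about above, the property would typically live in core-site.xml. A sketch, with the caveat that the empty-value fallback is as described in this thread rather than verified here (AES256 is the value hadoop-aws uses to request SSE-S3):

```xml
<!-- core-site.xml sketch: request SSE-S3; leave <value> empty (or omit the
     property entirely) for the default of no server-side encryption,
     per the discussion above. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>AES256</value>
</property>
```

An environment-variable-driven script can then always emit the property and simply pass an empty string through when no encryption is wanted.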
[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401955#comment-15401955 ] Oscar Morante commented on HADOOP-13075: Is there a value that I can use to specify the default behavior? I have a script that populates the configuration from env variables (to use in a docker container), and that would be helpful.
[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401948#comment-15401948 ] Andrew Olson commented on HADOOP-13075: --- [~spacepluk] Not encrypting is the default behavior, so no configuration needs to be set in that case.
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401739#comment-15401739 ] Hudson commented on HADOOP-13444: SUCCESS: Integrated in Hadoop-trunk-Commit #10186 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10186/])
HADOOP-13444. Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets (aajisaka: rev 770b5eb2db686275df445be9280e76cc3710ffdc)
Modified files:
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ShellBasedIdMapping.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/UserProvider.java
* hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/MKDIR3Request.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HtmlQuoting.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/LOOKUP3Request.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/BZip2Codec.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/CREATE3Request.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestGangliaMetrics.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/TFileDumper.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/SYMLINK3Request.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/REMOVE3Request.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/FileBasedIPList.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountResponse.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/ganglia/AbstractGangliaSink.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/LINK3Request.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/XDR.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/MKNOD3Request.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/StreamPumper.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/FileHandle.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceAdmin.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/security/CredentialsSys.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/GraphiteSink.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProvider.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/RENAME3Request.java
* hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/request/RMDIR3Request.java
> Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
> -
>
> Key: HADOOP-13444
> URL: https://issues.apache.org/jira/browse/HADOOP-13444
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 2.7.2
> Reporter: Vincent Poon
> Assignee: Vincent Poon
> Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13444.2.patch, HADOOP-13444.3.patch, HADOOP-13444.4.patch, HADOOP-13444.5.patch, HADOOP-13444.patch
>
> org.apache.commons.io.Charsets is deprecated in favor of java.nio.charset.StandardCharsets
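The substitution itself is mechanical. A minimal before/after sketch (the class and method names here are illustrative, not taken from the patch):

```java
import java.nio.charset.StandardCharsets;

public class CharsetMigration {

    // Before (commons-io, deprecated):
    //   byte[] b = s.getBytes(org.apache.commons.io.Charsets.UTF_8);
    // After: the JDK-provided constants, available since Java 7,
    //   with no extra dependency on commons-io.
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] b = encode("hadoop");
        // ASCII text encodes to one byte per character in UTF-8.
        System.out.println(b.length + " bytes");
    }
}
```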
[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties
[ https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401728#comment-15401728 ] Yuanbo Liu commented on HADOOP-13441: - The test failure seems related to HADOOP-12588 and HADOOP-13439 > Document LdapGroupsMapping keystore password properties > --- > > Key: HADOOP-13441 > URL: https://issues.apache.org/jira/browse/HADOOP-13441 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Wei-Chiu Chuang >Assignee: Yuanbo Liu >Priority: Minor > Labels: documentation > Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch > > > A few properties are not documented. > {{hadoop.security.group.mapping.ldap.ssl.keystore.password}} > This property is used as an alias to get password from credential providers, > or, fall back to using the value as password in clear text. There is also a > caveat that credential providers can not be a HDFS-based file system, as > mentioned in HADOOP-11934, to prevent cyclic dependency issue. > This should be documented in core-default.xml and GroupsMapping.md > {{hadoop.security.credential.clear-text-fallback}} > This property controls whether or not to fall back to storing credential > password as cleartext. > This should be documented in core-default.xml. > {{hadoop.security.credential.provider.path}} > This is mentioned in _CredentialProvider API Guide_, but not in > core-default.xml > The "Supported Features" in _CredentialProvider API Guide_ should link back > to GroupsMapping.md#LDAP Groups Mapping > {{hadoop.security.credstore.java-keystore-provider.password-file}} > This is the password file to protect credential files.
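The properties being documented can be sketched in core-site.xml roughly as follows (all values are placeholders; note the HADOOP-11934 caveat that the credential provider path must not point at an HDFS-based file system):

```xml
<!-- Placeholder values for illustration only. -->
<property>
  <name>hadoop.security.group.mapping.ldap.ssl.keystore.password</name>
  <!-- Tried first as a credential-provider alias; may fall back to
       interpreting the value as a clear-text password. -->
  <value>ldap.keystore.password.alias</value>
</property>
<property>
  <name>hadoop.security.credential.provider.path</name>
  <!-- Must not be an HDFS-based file system (HADOOP-11934). -->
  <value>localjceks://file/etc/hadoop/ldap.jceks</value>
</property>
<property>
  <name>hadoop.security.credential.clear-text-fallback</name>
  <value>false</value>
</property>
```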
[jira] [Updated] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13444: --- Fix Version/s: 3.0.0-alpha2
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401714#comment-15401714 ] Akira Ajisaka commented on HADOOP-13444: Committed this to trunk. Hi [~vincentpoon], would you rebase the patch for branch-2 and branch-2.8?
[jira] [Commented] (HADOOP-13444) Replace org.apache.commons.io.Charsets with java.nio.charset.StandardCharsets
[ https://issues.apache.org/jira/browse/HADOOP-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401712#comment-15401712 ] Akira Ajisaka commented on HADOOP-13444: +1, checking this in.
[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties
[ https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401700#comment-15401700 ] Hadoop QA commented on HADOOP-13441:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 37s | trunk passed |
| +1 | compile | 6m 46s | trunk passed |
| +1 | checkstyle | 0m 22s | trunk passed |
| -1 | mvnsite | 2m 10s | hadoop-common in trunk failed. |
| +1 | mvneclipse | 0m 12s | trunk passed |
| +1 | findbugs | 1m 33s | trunk passed |
| +1 | javadoc | 0m 43s | trunk passed |
| +1 | mvninstall | 0m 37s | the patch passed |
| +1 | compile | 6m 38s | the patch passed |
| +1 | javac | 6m 38s | the patch passed |
| +1 | checkstyle | 0m 23s | the patch passed |
| -1 | mvnsite | 2m 23s | hadoop-common in the patch failed. |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 40s | the patch passed |
| +1 | javadoc | 0m 45s | the patch passed |
| -1 | unit | 10m 0s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 43m 1s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12821277/HADOOP-13441.002.patch |
| JIRA Issue | HADOOP-13441 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle |
| uname | Linux 60bb33329486 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34ccaa8 |
| Default Java | 1.8.0_101 |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/10138/artifact/patchprocess/branch-mvnsite-hadoop-common-project_hadoop-common.txt |
| findbugs | v3.0.0 |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/10138/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10138/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10138/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10138/console |
| Powered by | Apache Yetus