[jira] [Created] (HADOOP-12369) Point hadoop-project/pom.xml java.security.krb5.conf within target folder
Andrew Wang created HADOOP-12369: Summary: Point hadoop-project/pom.xml java.security.krb5.conf within target folder Key: HADOOP-12369 URL: https://issues.apache.org/jira/browse/HADOOP-12369 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.7.1 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor This is used in the unit test environment; pointing within the src tree is naughty. The fix is simply to update it to point within the target directory instead: {noformat} - ${basedir}/src/test/resources/krb5.conf + ${test.cache.data}/krb5.conf {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12264) Update create-release.sh to pass -Preleasedocs
Andrew Wang created HADOOP-12264: Summary: Update create-release.sh to pass -Preleasedocs Key: HADOOP-12264 URL: https://issues.apache.org/jira/browse/HADOOP-12264 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang We use the create-release.sh script to create the release artifacts. Now that CHANGES.txt is autogenerated, we need to update the script to also pass -Preleasedocs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-12264) Update create-release.sh to pass -Preleasedocs
[ https://issues.apache.org/jira/browse/HADOOP-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-12264. -- Resolution: Duplicate This is a dupe of HADOOP-11793. Update create-release.sh to pass -Preleasedocs -- Key: HADOOP-12264 URL: https://issues.apache.org/jira/browse/HADOOP-12264 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang We use the create-release.sh script to create the release artifacts. Now that CHANGES.txt is autogenerated, we need to update the script to also pass -Preleasedocs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12194) Support for incremental generation in the protoc plugin
Andrew Wang created HADOOP-12194: Summary: Support for incremental generation in the protoc plugin Key: HADOOP-12194 URL: https://issues.apache.org/jira/browse/HADOOP-12194 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.6-alpha Reporter: Andrew Wang Assignee: Andrew Wang The protoc maven plugin currently generates new Java classes every time, which means Maven always picks up changed files in the build. It would be better if the protoc plugin only generated new Java classes when the source .proto files change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12193) Rename Touchz.java to Touch.java
Andrew Wang created HADOOP-12193: Summary: Rename Touchz.java to Touch.java Key: HADOOP-12193 URL: https://issues.apache.org/jira/browse/HADOOP-12193 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.6-alpha Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial The top level class in Touchz.java is named Touch. This means Maven's changed file detection doesn't work; it shows up as always changed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12195) Add annotation to package-info.java file to workaround MCOMPILER-205
Andrew Wang created HADOOP-12195: Summary: Add annotation to package-info.java file to workaround MCOMPILER-205 Key: HADOOP-12195 URL: https://issues.apache.org/jira/browse/HADOOP-12195 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.0.6-alpha Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Attachments: hadoop-12195.001.patch Maven fails incremental builds when there are source files that do not generate class files. One example of this is package-info.java. This (old) issue is tracked at MCOMPILER-205, where the recommended workaround is adding an annotation to the class. After this, with some other related fixes, I was able to do a proper incremental maven build. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
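The MCOMPILER-205 workaround can be sketched as below. The package name and annotation choice are illustrative only (Hadoop uses its own annotations in the actual patch); the point is that any annotation with class-file retention forces the compiler to emit a package-info.class, giving Maven's staleness check an output file to compare against:

```java
// package-info.java (hypothetical package): without an annotation, javac
// produces no package-info.class, so the Maven compiler plugin sees a source
// file with no output and re-compiles the module on every build.
// Any CLASS- or RUNTIME-retained annotation makes javac emit the class file.
@Deprecated
package org.example.hadoop;
```

@Deprecated is used here only because it is a standard annotation whose targets include PACKAGE; a project-specific marker annotation works just as well.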
[jira] [Created] (HADOOP-12055) Deprecate usage of NativeIO#link
Andrew Wang created HADOOP-12055: Summary: Deprecate usage of NativeIO#link Key: HADOOP-12055 URL: https://issues.apache.org/jira/browse/HADOOP-12055 Project: Hadoop Common Issue Type: Improvement Components: native Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang Since our min version is now JDK7, there's hardlink support via {{Files}}. This means we can deprecate the JNI implementation and discontinue usage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
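Since JDK7 the same hard-link behavior is available from java.nio.file without any JNI. A minimal sketch (file names are illustrative, and this assumes the underlying filesystem supports hard links):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkDemo {
    public static void main(String[] args) throws IOException {
        // Files.createLink is the JDK7+ replacement for a JNI link() call.
        Path dir = Files.createTempDirectory("hardlink-demo");
        Path target = dir.resolve("original.txt");
        Files.write(target, "hello".getBytes(StandardCharsets.UTF_8));
        Path link = dir.resolve("link.txt");
        Files.createLink(link, target); // hard link: both paths name one inode
        // The data reads back identically through either path.
        System.out.println(new String(Files.readAllBytes(link), StandardCharsets.UTF_8));
    }
}
```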
[jira] [Created] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
Andrew Wang created HADOOP-11885: Summary: hadoop-dist dist-layout-stitching.sh does not work with dash Key: HADOOP-11885 URL: https://issues.apache.org/jira/browse/HADOOP-11885 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Andrew Wang Saw this while building the EC branch, pretty sure it'll repro on trunk though too. {noformat} [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT . 
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT . [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11852) Disable symlinks in trunk
Andrew Wang created HADOOP-11852: Summary: Disable symlinks in trunk Key: HADOOP-11852 URL: https://issues.apache.org/jira/browse/HADOOP-11852 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Andrew Wang In HADOOP-10020 and HADOOP-10162 we disabled symlinks in branch-2. Since there's currently no plan to finish this work, let's disable it in trunk too. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11711) Provide a default value for CryptoCodec classes
Andrew Wang created HADOOP-11711: Summary: Provide a default value for CryptoCodec classes Key: HADOOP-11711 URL: https://issues.apache.org/jira/browse/HADOOP-11711 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.6.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Users can configure the desired class to use for a given codec via a property like {{hadoop.security.crypto.codec.classes.aes.ctr.nopadding}}. However, even though we provide a default value for this codec in {{core-default.xml}}, this default is not also set in the code. As a result, client deployments that do not include {{core-default.xml}} cannot resolve any codecs and get an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
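The usual fix pattern is to supply the default at the lookup site, so resolution works even when core-default.xml is absent from the client classpath. A minimal sketch using java.util.Properties as a stand-in for Hadoop's Configuration; the default class name shown is an assumption for illustration, not necessarily what the patch uses:

```java
import java.util.Properties;

public class DefaultFallbackDemo {
    // Key from the JIRA; the fallback value below is a hypothetical example.
    static final String KEY =
        "hadoop.security.crypto.codec.classes.aes.ctr.nopadding";
    static final String DEFAULT_CODEC =
        "org.apache.hadoop.crypto.JceAesCtrCryptoCodec";

    public static void main(String[] args) {
        // Simulates a deployment where core-default.xml never loaded.
        Properties conf = new Properties();
        // With a code-level default, the lookup can never return null.
        String codec = conf.getProperty(KEY, DEFAULT_CODEC);
        System.out.println(codec);
    }
}
```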
[jira] [Resolved] (HADOOP-11344) KMS kms-config.sh sets a default value for the keystore password even in non-ssl setup
[ https://issues.apache.org/jira/browse/HADOOP-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-11344. -- Resolution: Fixed Fix Version/s: 2.7.0 Committed to trunk and branch-2, thanks Arun! KMS kms-config.sh sets a default value for the keystore password even in non-ssl setup -- Key: HADOOP-11344 URL: https://issues.apache.org/jira/browse/HADOOP-11344 Project: Hadoop Common Issue Type: Bug Reporter: Arun Suresh Assignee: Arun Suresh Fix For: 2.7.0 Attachments: HADOOP-11344.1.patch, HADOOP-11344.2.patch, HADOOP-11344.3.patch, HADOOP-11344.4.patch This results in kms always starting up in ssl mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11311) Restrict uppercase key names from being created with JCEKS
Andrew Wang created HADOOP-11311: Summary: Restrict uppercase key names from being created with JCEKS Key: HADOOP-11311 URL: https://issues.apache.org/jira/browse/HADOOP-11311 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.5.1 Reporter: Andrew Wang Assignee: Andrew Wang The Java KeyStore spec is ambiguous about the requirements for case-sensitivity for KeyStore implementations. The JDK7 JCEKS is not case-sensitive. This makes it difficult to migrate from JCEKS to case-sensitive implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
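The underlying behavior is easy to demonstrate: JCEKS (like JKS) folds keystore aliases to lower case, so lookups ignore case entirely. A small sketch, assuming a JDK with the JCEKS keystore type available:

```java
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class JceksCaseDemo {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, null); // fresh, in-memory keystore
        SecretKey key = new SecretKeySpec(new byte[16], "AES");
        // Store under a mixed-case alias...
        ks.setKeyEntry("MyKey", key, "pw".toCharArray(), null);
        // ...and it is found under any casing, since aliases are lowercased.
        System.out.println(ks.containsAlias("mykey") + " " + ks.containsAlias("MYKEY"));
    }
}
```

This is why keys created with uppercase names under JCEKS cannot round-trip cleanly into a case-sensitive provider.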
[jira] [Created] (HADOOP-11312) Fix TestKMS to not use uppercase key names
Andrew Wang created HADOOP-11312: Summary: Fix TestKMS to not use uppercase key names Key: HADOOP-11312 URL: https://issues.apache.org/jira/browse/HADOOP-11312 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang After HADOOP-11311 uppercase key names aren't allowed, breaking some unit tests. Let's fix them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11173) Improve error messages for some KeyShell commands
Andrew Wang created HADOOP-11173: Summary: Improve error messages for some KeyShell commands Key: HADOOP-11173 URL: https://issues.apache.org/jira/browse/HADOOP-11173 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.6.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor A few KeyShell commands don't print the exception messages and just swallow the exception, resulting in a non-specific error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11166) Remove ulimit from test-patch.sh
Andrew Wang created HADOOP-11166: Summary: Remove ulimit from test-patch.sh Key: HADOOP-11166 URL: https://issues.apache.org/jira/browse/HADOOP-11166 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Andrew Wang Assignee: Andrew Wang Attachments: hadoop-11166.001.patch We set a ulimit in test-patch.sh on the number of open files. We also hit this limit all the time, leading to test failures. Let's remove it. Thanks [~abayer] for finding this issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-11159) Use 'git apply' to apply patch instead of 'patch' command in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-11159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-11159. -- Resolution: Duplicate Dupe of HADOOP-10926, suggesting we do the same thing. Use 'git apply' to apply patch instead of 'patch' command in Jenkins Key: HADOOP-11159 URL: https://issues.apache.org/jira/browse/HADOOP-11159 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.6.0 Reporter: Akira AJISAKA Currently, a patch that changes files with CR+LF line endings (such as *.cmd) and was created by 'git diff' cannot be applied by the 'patch' command, because 'git diff' outputs no CR+LF. Probably almost all developers use 'git diff' or 'git format-patch' to create patches now that the SCM has moved to Git. Therefore Jenkins should use 'git apply' to apply patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11153) Make number of KMS threads configurable
Andrew Wang created HADOOP-11153: Summary: Make number of KMS threads configurable Key: HADOOP-11153 URL: https://issues.apache.org/jira/browse/HADOOP-11153 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 2.6.0 Reporter: Andrew Wang Assignee: Andrew Wang It would be nice to make the number of KMS threads configurable. The Tomcat default is 200, but we may also want to raise it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11110) JavaKeystoreProvider should not report a key as created if it was not flushed to the backing file
Andrew Wang created HADOOP-11110: Summary: JavaKeystoreProvider should not report a key as created if it was not flushed to the backing file Key: HADOOP-11110 URL: https://issues.apache.org/jira/browse/HADOOP-11110 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.5.0 Reporter: Andrew Wang Testing with the KMS backed by JKS reveals the following: {noformat} [root@dlo-4 ~]# hadoop key create testkey -provider kms://http@localhost:16000/kms testkey has not been created. Mkdirs failed to create file:x stack trace [root@dlo-4 ~]# hadoop key list -provider kms://http@localhost:16000/kms Listing keys for KeyProvider: KMSClientProvider[http://localhost:16000/kms/v1/] testkey {noformat} The JKS still has the key in memory and serves it up, but the key will disappear if the KMS is restarted, since it was never flushed to the file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
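The fix pattern here is: do not acknowledge creation until the write to the backing file has succeeded. A minimal, hypothetical sketch (class name, fields, and behavior are illustrative, not Hadoop's actual JavaKeyStoreProvider):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FlushBeforeAck {
    private final Map<String, byte[]> keys = new HashMap<>();
    private final File backing;

    FlushBeforeAck(File backing) { this.backing = backing; }

    // Only report success after the material is durably on disk, so an
    // immediate restart cannot "forget" a key the caller was told exists.
    boolean createKey(String name, byte[] material) {
        try (FileOutputStream out = new FileOutputStream(backing)) {
            out.write(material);   // persist first...
            out.getFD().sync();    // ...and force the write to disk
            keys.put(name, material);
            return true;           // only now acknowledge creation
        } catch (IOException e) {
            return false;          // nothing cached, nothing reported
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("jks-demo", ".bin");
        f.deleteOnExit();
        FlushBeforeAck provider = new FlushBeforeAck(f);
        System.out.println(provider.createKey("testkey", new byte[]{1, 2, 3})
                ? "created" : "not created");
    }
}
```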
[jira] [Resolved] (HADOOP-10150) Hadoop cryptographic file system
[ https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10150. -- Resolution: Fixed I've committed this to trunk as part of merging fs-encryption. Thanks for all the work from all contributors here, especially [~hitliuyi]! Hadoop cryptographic file system Key: HADOOP-10150 URL: https://issues.apache.org/jira/browse/HADOOP-10150 Project: Hadoop Common Issue Type: New Feature Components: security Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Labels: rhino Fix For: 3.0.0 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file system-V2.docx, HADOOP cryptographic file system.pdf, HDFSDataAtRestEncryptionAlternatives.pdf, HDFSDataatRestEncryptionAttackVectors.pdf, HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based on INode feature.patch There is an increasing need for securing data when Hadoop customers use various upper-layer applications, such as Map-Reduce, Hive, Pig, HBase and so on. HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based on HADOOP “FilterFileSystem” decorating DFS or other file systems, and is transparent to upper-layer applications. It's configurable, scalable and fast. High-level requirements:
1. Transparent to, and no modification required for, upper-layer applications.
2. “Seek” and “PositionedReadable” are supported for the CFS input stream if the wrapped file system supports them.
3. Very high performance for encryption and decryption; they will not become a bottleneck.
4. Can decorate HDFS and all other file systems in Hadoop, and will not modify the existing structure of the file system (such as the namenode and datanode structure if the wrapped file system is HDFS).
5. Admin can configure encryption policies, such as which directory will be encrypted.
6. A robust key management framework.
7. Support for Pread and append operations if the wrapped file system supports them.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10984) Add hostname to filename of KMS log
Andrew Wang created HADOOP-10984: Summary: Add hostname to filename of KMS log Key: HADOOP-10984 URL: https://issues.apache.org/jira/browse/HADOOP-10984 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Alejandro Abdelnur It'd be nice if, rather than being named kms.log, the log filename included the hostname, e.g. kms-${hostname}.log. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10970) Cleanup KMS configuration keys
Andrew Wang created HADOOP-10970: Summary: Cleanup KMS configuration keys Key: HADOOP-10970 URL: https://issues.apache.org/jira/browse/HADOOP-10970 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Andrew Wang It'd be nice to add descriptions to the config keys in kms-site.xml. Also, it'd be good to rename key.provider.path to key.provider.uri for clarity, or just drop .path. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Reopened] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter
[ https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-10876: -- The constructor of Path should not take an empty URL as a parameter --- Key: HADOOP-10876 URL: https://issues.apache.org/jira/browse/HADOOP-10876 Project: Hadoop Common Issue Type: Bug Reporter: zhihai xu Assignee: zhihai xu Fix For: 2.6.0 Attachments: HADOOP-10876.000.patch, HADOOP-10876.001.patch The constructor of Path should not take an empty URL as a parameter. As discussed in HADOOP-10820, this JIRA changes the Path(URI aUri) constructor to check for an empty URI and throw IllegalArgumentException. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter
[ https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10876. -- Resolution: Won't Fix I reverted this out of trunk and branch-2 after some discussion with Zhihai. We'll fix the user-level issue elsewhere. The constructor of Path should not take an empty URL as a parameter --- Key: HADOOP-10876 URL: https://issues.apache.org/jira/browse/HADOOP-10876 Project: Hadoop Common Issue Type: Bug Reporter: zhihai xu Assignee: zhihai xu Fix For: 2.6.0 Attachments: HADOOP-10876.000.patch, HADOOP-10876.001.patch The constructor of Path should not take an empty URL as a parameter. As discussed in HADOOP-10820, this JIRA changes the Path(URI aUri) constructor to check for an empty URI and throw IllegalArgumentException. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10936) Change default KeyProvider bitlength to 128
Andrew Wang created HADOOP-10936: Summary: Change default KeyProvider bitlength to 128 Key: HADOOP-10936 URL: https://issues.apache.org/jira/browse/HADOOP-10936 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Andrew Wang You need to download the unlimited-strength JCE policy files to work with 256-bit keys. It'd be good to change the default to 128 to avoid needing the unlimited-strength JCE, and to print out the bitlength being used in relevant places. -- This message was sent by Atlassian JIRA (v6.2#6252)
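Whether 256-bit AES is usable in a given deployment can be checked at runtime through the standard JCE API. A small sketch (on older JDKs without the unlimited-strength policy files the limit is 128; recent JDKs ship with the unlimited policy enabled, so the observed value varies by environment):

```java
import javax.crypto.Cipher;

public class AesKeyLengthCheck {
    public static void main(String[] args) throws Exception {
        // Reports the maximum AES key length the installed JCE policy allows:
        // 128 under the default policy of older JDKs, Integer.MAX_VALUE under
        // the unlimited-strength policy.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println(max >= 256
                ? "256-bit AES usable"
                : "AES limited to " + max + " bits");
    }
}
```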
[jira] [Created] (HADOOP-10922) User documentation for CredentialShell
Andrew Wang created HADOOP-10922: Summary: User documentation for CredentialShell Key: HADOOP-10922 URL: https://issues.apache.org/jira/browse/HADOOP-10922 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.6.0 Reporter: Andrew Wang The CredentialShell needs end user documentation for the website. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10923) User documentation for KeyShell
Andrew Wang created HADOOP-10923: Summary: User documentation for KeyShell Key: HADOOP-10923 URL: https://issues.apache.org/jira/browse/HADOOP-10923 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang The KeyShell needs user documentation for the website. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10926) Improve test-patch.sh to apply binary diffs
Andrew Wang created HADOOP-10926: Summary: Improve test-patch.sh to apply binary diffs Key: HADOOP-10926 URL: https://issues.apache.org/jira/browse/HADOOP-10926 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang The Unix {{patch}} command cannot apply binary diffs as generated via {{git diff --binary}}. This means we cannot get effective test-patch.sh runs when the patch requires adding a binary file. We should consider using a different patch method. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10900) CredentialShell args should use single-dash style
Andrew Wang created HADOOP-10900: Summary: CredentialShell args should use single-dash style Key: HADOOP-10900 URL: https://issues.apache.org/jira/browse/HADOOP-10900 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor As was discussed in HADOOP-10793 related to KeyShell, we should standardize on single-dash flags for things in branch-2. CredentialShell also needs to be updated. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10891) Add factory methods to KeyProviderCryptoExtension
Andrew Wang created HADOOP-10891: Summary: Add factory methods to KeyProviderCryptoExtension Key: HADOOP-10891 URL: https://issues.apache.org/jira/browse/HADOOP-10891 Project: Hadoop Common Issue Type: Improvement Reporter: Andrew Wang Assignee: Andrew Wang For fs-encryption, we need to create an EncryptedKeyVersion from its component parts for decryption. We also need a way of getting a KeyProviderCryptoExtension from a conf. Both of these can be done with factory methods. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension
Andrew Wang created HADOOP-10881: Summary: Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension Key: HADOOP-10881 URL: https://issues.apache.org/jira/browse/HADOOP-10881 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Andrew Wang -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10856) HarFileSystem and HarFs support for HDFS encryption
Andrew Wang created HADOOP-10856: Summary: HarFileSystem and HarFs support for HDFS encryption Key: HADOOP-10856 URL: https://issues.apache.org/jira/browse/HADOOP-10856 Project: Hadoop Common Issue Type: Bug Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Andrew Wang Assignee: Andrew Wang We need to examine support for Har with HDFS encryption. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10837) Fix the failures from TestSymlinkLocalFSFileContext TestSymlinkLocalFSFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10837. -- Resolution: Duplicate Hi Uma, I think this is a dupe of HADOOP-10510. Please re-open if you think this is not the case. Fix the failures from TestSymlinkLocalFSFileContext TestSymlinkLocalFSFileSystem -- Key: HADOOP-10837 URL: https://issues.apache.org/jira/browse/HADOOP-10837 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 3.0.0 Reporter: Uma Maheswara Rao G There are failures in trunk: {noformat} java.io.IOException: Path file:/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test1/test/link is not a symbolic link at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:266) at org.apache.hadoop.fs.RawLocalFileSystem.getLinkTarget(RawLocalFileSystem.java:813) at org.apache.hadoop.fs.LocalFileSystem.getLinkTarget(LocalFileSystem.java:165) at org.apache.hadoop.fs.FileSystemTestWrapper.getLinkTarget(FileSystemTestWrapper.java:305) at org.apache.hadoop.fs.SymlinkBaseTest.testCreateLinkToDotDotPrefix(SymlinkBaseTest.java:818) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {noformat} log: {noformat} 2014-07-15 18:06:11,235 WARN 
fs.FileUtil (FileUtil.java:symLink(829)) - Command 'ln -s /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test1/file /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test2/linkToFile' failed 1 with: ln: failed to create symbolic link '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test2/linkToFile': No such file or directory 2014-07-15 18:06:11,433 WARN fs.FileUtil (FileUtil.java:symLink(829)) - Command 'ln -s /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test1/file /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test1/linkToFile' failed 1 with: ln: failed to create symbolic link '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/test/data/pUXLoMXfzU/test1/linkToFile': File exists {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10802) Add metrics for KMS client and server encrypted key caches
Andrew Wang created HADOOP-10802: Summary: Add metrics for KMS client and server encrypted key caches Key: HADOOP-10802 URL: https://issues.apache.org/jira/browse/HADOOP-10802 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Arun Suresh HADOOP-10720 is adding KMS server and client caches for encrypted keys for performance reasons. It would be good to add metrics to make sure that the cache is working as expected, and to inform future dynamic cache sizing and refilling policies. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10713) Document thread-safety of CryptoCodec#generateSecureRandom
Andrew Wang created HADOOP-10713: Summary: Document thread-safety of CryptoCodec#generateSecureRandom Key: HADOOP-10713 URL: https://issues.apache.org/jira/browse/HADOOP-10713 Project: Hadoop Common Issue Type: Sub-task Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134) Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Random implementations have to deal with thread-safety; this should be specified in the javadoc so implementors know to do this for CryptoCodec subclasses. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10548) Improve FsShell xattr error handling and other fixes
Andrew Wang created HADOOP-10548: Summary: Improve FsShell xattr error handling and other fixes Key: HADOOP-10548 URL: https://issues.apache.org/jira/browse/HADOOP-10548 Project: Hadoop Common Issue Type: Sub-task Affects Versions: HDFS XAttrs (HDFS-2006) Reporter: Andrew Wang Assignee: Charles Lamb Priority: Minor A couple small remaining issues from HADOOP-10521 we should address in this follow-on JIRA. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10521) FsShell commands for extended attributes.
[ https://issues.apache.org/jira/browse/HADOOP-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10521. -- Resolution: Fixed Fix Version/s: HDFS XAttrs (HDFS-2006) Target Version/s: HDFS XAttrs (HDFS-2006) (was: 3.0.0) Hey all, I'm +1 on this as well. I committed the latest version of this patch to the branch and created HADOOP-10548 to handle the remaining feedback. I assigned it to Charlie since he had most of the remaining comments, and he can pick up the Enum handling there as well. Thanks all! FsShell commands for extended attributes. - Key: HADOOP-10521 URL: https://issues.apache.org/jira/browse/HADOOP-10521 Project: Hadoop Common Issue Type: Sub-task Components: fs Affects Versions: HDFS XAttrs (HDFS-2006) Reporter: Yi Liu Assignee: Yi Liu Fix For: HDFS XAttrs (HDFS-2006) Attachments: HADOOP-10521.1.patch, HADOOP-10521.2.patch, HADOOP-10521.3.patch, HADOOP-10521.patch “setfattr” and “getfattr” commands are added to FsShell for XAttr, and these are the same as in Linux. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10546) Javadoc and other small fixes for extended attributes in hadoop-common
Andrew Wang created HADOOP-10546: Summary: Javadoc and other small fixes for extended attributes in hadoop-common Key: HADOOP-10546 URL: https://issues.apache.org/jira/browse/HADOOP-10546 Project: Hadoop Common Issue Type: Sub-task Reporter: Andrew Wang Priority: Minor There are some additional comments from [~clamb] and [~vinayrpet] related to javadoc and other small fixes on HADOOP-10520, let's fix them in this follow-on JIRA. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10546) Javadoc and other small fixes for extended attributes in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10546. -- Resolution: Fixed Fix Version/s: HDFS XAttrs (HDFS-2006) Committed to branch, thanks Charles and Vinay. Javadoc and other small fixes for extended attributes in hadoop-common -- Key: HADOOP-10546 URL: https://issues.apache.org/jira/browse/HADOOP-10546 Project: Hadoop Common Issue Type: Sub-task Components: fs Reporter: Andrew Wang Assignee: Charles Lamb Priority: Minor Fix For: HDFS XAttrs (HDFS-2006) Attachments: HADOOP-10546.1.patch There are some additional comments from [~clamb] and [~vinayrpet] related to javadoc and other small fixes on HADOOP-10520, let's fix them in this follow-on JIRA. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10317) Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT
Andrew Wang created HADOOP-10317: Summary: Rename branch-2.3 release version from 2.4.0-SNAPSHOT to 2.3.0-SNAPSHOT Key: HADOOP-10317 URL: https://issues.apache.org/jira/browse/HADOOP-10317 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.3.0 Reporter: Andrew Wang Assignee: Andrew Wang Right now the pom.xml's refer to 2.4 rather than 2.3 in branch-2.3. We need to update them. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Reopened] (HADOOP-10112) har file listing doesn't work with wild card
[ https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-10112: -- har file listing doesn't work with wild card - Key: HADOOP-10112 URL: https://issues.apache.org/jira/browse/HADOOP-10112 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 2.2.0 Reporter: Brandon Li Assignee: Brandon Li Fix For: 2.3.0 Attachments: HADOOP-10112.004.patch [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/* -ls: Can not create a Path from an empty string Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...] It works without *. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (HADOOP-10112) har file listing doesn't work with wild card
[ https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10112. -- Resolution: Invalid har file listing doesn't work with wild card - Key: HADOOP-10112 URL: https://issues.apache.org/jira/browse/HADOOP-10112 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 2.2.0 Reporter: Brandon Li Assignee: Brandon Li Fix For: 2.3.0 Attachments: HADOOP-10112.004.patch [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/* -ls: Can not create a Path from an empty string Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...] It works without *. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Reopened] (HADOOP-10198) DomainSocket: add support for socketpair
[ https://issues.apache.org/jira/browse/HADOOP-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-10198: -- Sorry, need to reopen this so we can get a Jenkins run for completeness. DomainSocket: add support for socketpair Key: HADOOP-10198 URL: https://issues.apache.org/jira/browse/HADOOP-10198 Project: Hadoop Common Issue Type: Improvement Components: native Affects Versions: 2.4.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Fix For: 2.4.0 Attachments: HADOOP-10198.001.patch Add support for {{DomainSocket#socketpair}}. This function uses the POSIX function of the same name to create two UNIX domain sockets which are connected to each other. This will be useful for HDFS-5182. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HADOOP-10120) Additional sliding window metrics
Andrew Wang created HADOOP-10120: Summary: Additional sliding window metrics Key: HADOOP-10120 URL: https://issues.apache.org/jira/browse/HADOOP-10120 Project: Hadoop Common Issue Type: New Feature Components: metrics Affects Versions: 2.2.0 Reporter: Andrew Wang Assignee: Andrew Wang For HDFS-5350 we'd like to report the last few fsimage transfer times as a health metric. This would mean (for example) a sliding window of the last 10 transfer times, when it was last updated, and the total count. It'd be nice to have a metrics class that did this. It'd also be interesting to have some kind of time-based sliding window for statistics like counts and averages. This would let us answer questions like: how many RPCs happened in the last 10s? minute? 5 minutes? 10 minutes? Commutative metrics like counts and averages are easy to aggregate in this fashion. -- This message was sent by Atlassian JIRA (v6.1#6144)
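A last-N sliding window of this shape can be sketched as follows. This is a hypothetical illustration only, not the metrics class that was ultimately committed; the name {{SlidingWindowMetric}} and its methods are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a last-N sliding window metric; names are
// invented for illustration, not the class that was committed.
class SlidingWindowMetric {
    private final int windowSize;
    private final Deque<Long> samples = new ArrayDeque<>();
    private long totalCount = 0;    // all-time count, never evicted
    private long lastUpdated = 0;   // timestamp of the newest sample

    SlidingWindowMetric(int windowSize) {
        this.windowSize = windowSize;
    }

    void add(long value, long nowMillis) {
        if (samples.size() == windowSize) {
            samples.removeFirst();  // evict the oldest of the last N
        }
        samples.addLast(value);
        totalCount++;
        lastUpdated = nowMillis;
    }

    double windowAverage() {
        if (samples.isEmpty()) {
            return 0.0;
        }
        long sum = 0;
        for (long s : samples) {
            sum += s;
        }
        return (double) sum / samples.size();
    }

    long getTotalCount() { return totalCount; }
    long getLastUpdated() { return lastUpdated; }
}
```

A time-based window (RPCs in the last minute) would additionally timestamp each sample and evict by age rather than by count.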
[jira] [Reopened] (HADOOP-10052) Temporarily disable client-side symlink resolution
[ https://issues.apache.org/jira/browse/HADOOP-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-10052: -- Re-opening because our internal Jenkins revealed that I missed re-enabling symlinks in a test. Will post an addendum fix shortly. Temporarily disable client-side symlink resolution -- Key: HADOOP-10052 URL: https://issues.apache.org/jira/browse/HADOOP-10052 Project: Hadoop Common Issue Type: Sub-task Components: fs Affects Versions: 2.2.0 Reporter: Andrew Wang Assignee: Andrew Wang Fix For: 2.2.1 Attachments: hadoop-10052-1.patch, hadoop-10052-branch-2.2-1.patch, hadoop-10052-branch-2-2-addendum.patch As a follow-on to the JIRA that disabled creation of symlinks on the server-side, we should also disable client-side resolution so old clients talking to a new server behave properly. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (HADOOP-10052) Temporarily disable client-side symlink resolution
[ https://issues.apache.org/jira/browse/HADOOP-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-10052. -- Resolution: Fixed Thanks Colin, addendum committed to branch-2.2 only. Temporarily disable client-side symlink resolution -- Key: HADOOP-10052 URL: https://issues.apache.org/jira/browse/HADOOP-10052 Project: Hadoop Common Issue Type: Sub-task Components: fs Affects Versions: 2.2.0 Reporter: Andrew Wang Assignee: Andrew Wang Fix For: 2.2.1 Attachments: hadoop-10052-1.patch, hadoop-10052-branch-2.2-1.patch, hadoop-10052-branch-2-2-addendum.patch As a follow-on to the JIRA that disabled creation of symlinks on the server-side, we should also disable client-side resolution so old clients talking to a new server behave properly. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HADOOP-10052) Temporarily disable client-side symlink resolution
Andrew Wang created HADOOP-10052: Summary: Temporarily disable client-side symlink resolution Key: HADOOP-10052 URL: https://issues.apache.org/jira/browse/HADOOP-10052 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.2.0 Reporter: Andrew Wang Assignee: Andrew Wang As a follow-on to the JIRA that disabled creation of symlinks on the server-side, we should also disable client-side resolution so old clients talking to a new server behave properly. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Resolved] (HADOOP-9761) ViewFileSystem#rename fails when using DistributedFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-9761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-9761. - Resolution: Fixed Fix Version/s: (was: 2.3.0) 2.1.2-beta Committed to branch-2.1 and updated the CHANGES.txt in other branch-2 and trunk. ViewFileSystem#rename fails when using DistributedFileSystem Key: HADOOP-9761 URL: https://issues.apache.org/jira/browse/HADOOP-9761 Project: Hadoop Common Issue Type: Bug Components: viewfs Affects Versions: 3.0.0, 2.1.0-beta Reporter: Andrew Wang Assignee: Andrew Wang Priority: Blocker Fix For: 2.1.2-beta Attachments: 0001-HADOOP-9761.004.patch, hadoop-9761-1.patch, hadoop-9761-1.patch, hadoop-9761-2.patch ViewFileSystem currently passes unqualified paths (no scheme or authority) to underlying FileSystems when doing a rename. DistributedFileSystem symlink support added in HADOOP-9418 needs to qualify and check rename sources and destinations since cross-filesystem renames aren't supported, so this breaks in the following way - Default FS URI is configured to viewfs://viewfs - When doing a rename, ViewFileSystem checks to make sure both src and dst FileSystems are the same (which they are, both in same DFS), and then calls DistributedFileSystem#rename with unqualified remainder paths - Since these paths are unqualified, DFS qualifies them with the default FS to check that it can do the rename. This turns it into viewfs://viewfs/path - Since viewfs://viewfs is not the DFS's URI, DFS errors out the rename. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-9761) ViewFileSystem#rename fails when using DistributedFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-9761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-9761: - Reopening to track backporting this to branch-2.1 or similar for GA. ViewFileSystem#rename fails when using DistributedFileSystem Key: HADOOP-9761 URL: https://issues.apache.org/jira/browse/HADOOP-9761 Project: Hadoop Common Issue Type: Bug Components: viewfs Affects Versions: 3.0.0, 2.1.0-beta Reporter: Andrew Wang Assignee: Andrew Wang Fix For: 2.3.0 Attachments: 0001-HADOOP-9761.004.patch, hadoop-9761-1.patch, hadoop-9761-1.patch, hadoop-9761-2.patch ViewFileSystem currently passes unqualified paths (no scheme or authority) to underlying FileSystems when doing a rename. DistributedFileSystem symlink support added in HADOOP-9418 needs to qualify and check rename sources and destinations since cross-filesystem renames aren't supported, so this breaks in the following way - Default FS URI is configured to viewfs://viewfs - When doing a rename, ViewFileSystem checks to make sure both src and dst FileSystems are the same (which they are, both in same DFS), and then calls DistributedFileSystem#rename with unqualified remainder paths - Since these paths are unqualified, DFS qualifies them with the default FS to check that it can do the rename. This turns it into viewfs://viewfs/path - Since viewfs://viewfs is not the DFS's URI, DFS errors out the rename. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9974) Trunk Build Failure at HDFS Sub-project
[ https://issues.apache.org/jira/browse/HADOOP-9974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-9974. - Resolution: Not A Problem Resolving, please re-open if Steve's fix is insufficient. FWIW, I have not hit this in my Ubuntu dev environment. Trunk Build Failure at HDFS Sub-project --- Key: HADOOP-9974 URL: https://issues.apache.org/jira/browse/HADOOP-9974 Project: Hadoop Common Issue Type: Bug Environment: Mac OS X Reporter: Zhijie Shen Recently Hadoop upgraded to use Protobuf 2.5.0. To build the trunk, I updated my installed Protobuf to 2.5.0. With this upgrade, I didn't encounter the build failure due to protoc, but failed when building the HDFS sub-project. Below is the failure message. I'm using Mac OS X.
{code}
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main SUCCESS [1.075s]
[INFO] Apache Hadoop Project POM SUCCESS [0.805s]
[INFO] Apache Hadoop Annotations SUCCESS [2.283s]
[INFO] Apache Hadoop Assemblies SUCCESS [0.343s]
[INFO] Apache Hadoop Project Dist POM SUCCESS [1.913s]
[INFO] Apache Hadoop Maven Plugins SUCCESS [2.390s]
[INFO] Apache Hadoop Auth SUCCESS [2.597s]
[INFO] Apache Hadoop Auth Examples SUCCESS [1.868s]
[INFO] Apache Hadoop Common SUCCESS [55.798s]
[INFO] Apache Hadoop NFS SUCCESS [3.549s]
[INFO] Apache Hadoop MiniKDC SUCCESS [1.788s]
[INFO] Apache Hadoop Common Project SUCCESS [0.044s]
[INFO] Apache Hadoop HDFS FAILURE [25.219s]
[INFO] Apache Hadoop HttpFS SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal SKIPPED
[INFO] Apache Hadoop HDFS-NFS SKIPPED
[INFO] Apache Hadoop HDFS Project SKIPPED
[INFO] hadoop-yarn SKIPPED
[INFO] hadoop-yarn-api SKIPPED
[INFO] hadoop-yarn-common SKIPPED
[INFO] hadoop-yarn-server SKIPPED
[INFO] hadoop-yarn-server-common SKIPPED
[INFO] hadoop-yarn-server-nodemanager SKIPPED
[INFO] hadoop-yarn-server-web-proxy SKIPPED
[INFO] hadoop-yarn-server-resourcemanager SKIPPED
[INFO] hadoop-yarn-server-tests SKIPPED
[INFO] hadoop-yarn-client SKIPPED
[INFO] hadoop-yarn-applications SKIPPED
[INFO] hadoop-yarn-applications-distributedshell SKIPPED
[INFO] hadoop-mapreduce-client SKIPPED
[INFO] hadoop-mapreduce-client-core SKIPPED
[INFO] hadoop-yarn-applications-unmanaged-am-launcher SKIPPED
[INFO] hadoop-yarn-site SKIPPED
[INFO] hadoop-yarn-project SKIPPED
[INFO] hadoop-mapreduce-client-common SKIPPED
[INFO] hadoop-mapreduce-client-shuffle SKIPPED
[INFO] hadoop-mapreduce-client-app SKIPPED
[INFO] hadoop-mapreduce-client-hs SKIPPED
[INFO] hadoop-mapreduce-client-jobclient SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins SKIPPED
[INFO] Apache Hadoop MapReduce Examples SKIPPED
[INFO] hadoop-mapreduce SKIPPED
[INFO] Apache Hadoop MapReduce Streaming SKIPPED
[INFO] Apache Hadoop Distributed Copy SKIPPED
[INFO] Apache Hadoop Archives SKIPPED
[INFO] Apache Hadoop Rumen SKIPPED
[INFO] Apache Hadoop Gridmix SKIPPED
[INFO] Apache Hadoop Data Join SKIPPED
[INFO] Apache Hadoop Extras SKIPPED
[INFO] Apache Hadoop Pipes SKIPPED
[INFO] Apache Hadoop Tools Dist SKIPPED
[INFO] Apache Hadoop Tools SKIPPED
[INFO] Apache Hadoop Distribution SKIPPED
[INFO] Apache Hadoop Client SKIPPED
[INFO] Apache Hadoop Mini-Cluster SKIPPED
[INFO]
[INFO] BUILD FAILURE
[INFO]
{code}
[jira] [Created] (HADOOP-9958) Add old constructor back to DelegationTokenInformation to unbreak downstream builds
Andrew Wang created HADOOP-9958: --- Summary: Add old constructor back to DelegationTokenInformation to unbreak downstream builds Key: HADOOP-9958 URL: https://issues.apache.org/jira/browse/HADOOP-9958 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.1.1-beta Reporter: Andrew Wang Assignee: Andrew Wang HDFS-4680 added an argument to the constructor of DelegationTokenInformation, which is an incompatible change for downstreams. Let's add the old one back in. See: HIVE-5281 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-9877) Fix listing of snapshot directories in globStatus
[ https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-9877: - Hey folks, based on Jason's report in HADOOP-9912, I reverted this change from branch-2 and trunk, since it broke Pig completely. This, of course, means we can no longer list snapshot directories, so this is a temporary fix while we work on fixing the globber more broadly in HADOOP-9912. Fix listing of snapshot directories in globStatus - Key: HADOOP-9877 URL: https://issues.apache.org/jira/browse/HADOOP-9877 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Binglin Chang Assignee: Binglin Chang Fix For: 3.0.0, 2.3.0 Attachments: HADOOP-9877-branch2.patch, HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, HADOOP-9877.v3.patch, HADOOP-9877.v4.patch, HADOOP-9877.v5.patch
{code}
decster:~/hadoop bin/hadoop fs -ls /foo/.snapshot
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
ls: `/foo/.snapshot': No such file or directory
{code}
HADOOP-9817 refactored some globStatus code but forgot to handle the special case that the .snapshot dir does not show up in listStatus even though it exists, so we need to explicitly check path existence using getFileStatus rather than depending on listStatus results. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9877) Fix listing of snapshot directories in globStatus
[ https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-9877. - Resolution: Fixed Target Version/s: 2.3.0 (was: 2.1.0-beta) +1 for branch-2 patch, committed. Thanks Binglin for the quick turnaround! Fix listing of snapshot directories in globStatus - Key: HADOOP-9877 URL: https://issues.apache.org/jira/browse/HADOOP-9877 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Binglin Chang Assignee: Binglin Chang Fix For: 3.0.0, 2.3.0 Attachments: HADOOP-9877-branch2.patch, HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, HADOOP-9877.v3.patch, HADOOP-9877.v4.patch, HADOOP-9877.v5.patch
{code}
decster:~/hadoop bin/hadoop fs -ls /foo/.snapshot
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
ls: `/foo/.snapshot': No such file or directory
{code}
HADOOP-9817 refactored some globStatus code but forgot to handle the special case that the .snapshot dir does not show up in listStatus even though it exists, so we need to explicitly check path existence using getFileStatus rather than depending on listStatus results. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HADOOP-9877) Fix listing of snapshot directories in globStatus
[ https://issues.apache.org/jira/browse/HADOOP-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reopened HADOOP-9877: - Hey Binglin, turns out this doesn't apply cleanly to branch-2 (had to revert it out). Do you mind submitting a branch-2 version as well? Fix listing of snapshot directories in globStatus - Key: HADOOP-9877 URL: https://issues.apache.org/jira/browse/HADOOP-9877 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Binglin Chang Assignee: Binglin Chang Fix For: 3.0.0, 2.3.0 Attachments: HADOOP-9877.v1.patch, HADOOP-9877.v2.patch, HADOOP-9877.v3.patch, HADOOP-9877.v4.patch, HADOOP-9877.v5.patch
{code}
decster:~/hadoop bin/hadoop fs -ls /foo/.snapshot
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/)
13/08/16 01:17:22 INFO hdfs.DFSClient: + listPath(/foo)
ls: `/foo/.snapshot': No such file or directory
{code}
HADOOP-9817 refactored some globStatus code but forgot to handle the special case that the .snapshot dir does not show up in listStatus even though it exists, so we need to explicitly check path existence using getFileStatus rather than depending on listStatus results. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
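The core idea of the HADOOP-9877 fix can be modeled schematically: a child such as {{.snapshot}} may be resolvable even though the parent's listing omits it, so existence must be probed directly (getFileStatus-style) rather than inferred from the listStatus output. The class and its Map/Set model of a filesystem below are invented for illustration; this is not the actual globber code.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Schematic model of the fix: existence of a hidden child like ".snapshot"
// must be probed directly, because the parent's listing may omit it even
// though the path resolves. All names here are invented for illustration.
class GlobExistenceCheck {
    static boolean exists(Map<String, List<String>> listings,
                          Set<String> statable, String path) {
        int slash = path.lastIndexOf('/');
        String parent = (slash == 0) ? "/" : path.substring(0, slash);
        String name = path.substring(slash + 1);
        // Fast path: the child is visible in the listStatus output.
        if (listings.getOrDefault(parent, List.of()).contains(name)) {
            return true;
        }
        // Fallback: stat the full path directly, as the fix describes.
        return statable.contains(path);
    }
}
```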
[jira] [Created] (HADOOP-9847) TestGlobPath symlink tests fail to cleanup properly
Andrew Wang created HADOOP-9847: --- Summary: TestGlobPath symlink tests fail to cleanup properly Key: HADOOP-9847 URL: https://issues.apache.org/jira/browse/HADOOP-9847 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.3.0 Reporter: Andrew Wang Priority: Minor On our internal trunk Jenkins runs, I've seen failures like the following: {noformat} Error Message: Cannot delete /user/jenkins. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use hdfs dfsadmin -safemode leave to turn safe mode off. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025) Stack Trace: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /user/jenkins. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use hdfs dfsadmin -safemode leave to turn safe mode off. at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025) at org.apache.hadoop.ipc.Client.call(Client.java:1399) at org.apache.hadoop.ipc.Client.call(Client.java:1352) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) at $Proxy15.delete(Unknown Source) at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101) at $Proxy15.delete(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:449) at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1575) at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:585) at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:581)
[jira] [Created] (HADOOP-9818) Remove usage of bash -c from oah.fs.DF
Andrew Wang created HADOOP-9818: --- Summary: Remove usage of bash -c from oah.fs.DF Key: HADOOP-9818 URL: https://issues.apache.org/jira/browse/HADOOP-9818 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0, 2.1.0-beta Reporter: Andrew Wang {{DF}} uses bash -c to shell out to the unix {{df}} command. This is potentially unsafe; let's think about removing it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
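The usual remedy for a {{bash -c}} wrapper is to pass an argv directly, so no shell ever re-parses the path. A minimal sketch under that assumption (the class name {{DfCommand}} is invented; this is not the actual oah.fs.DF fix):

```java
import java.io.IOException;

// Hypothetical sketch (class name invented): build the df argv directly
// so each argument reaches exec() verbatim and is never re-parsed by a
// shell, which is the usual way to remove a "bash -c" wrapper.
class DfCommand {
    static String[] buildCommand(String dirPath) {
        // Even a path containing spaces or shell metacharacters stays a
        // single argument here.
        return new String[] { "df", "-k", dirPath };
    }

    // ProcessBuilder execs the argv itself; no bash is involved.
    static Process run(String dirPath) throws IOException {
        return new ProcessBuilder(buildCommand(dirPath)).start();
    }
}
```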
[jira] [Created] (HADOOP-9788) Add sticky bit support to org.apache.hadoop.fs.Stat
Andrew Wang created HADOOP-9788: --- Summary: Add sticky bit support to org.apache.hadoop.fs.Stat Key: HADOOP-9788 URL: https://issues.apache.org/jira/browse/HADOOP-9788 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.3.0 Reporter: Andrew Wang Assignee: Andrew Wang This is currently breaking DFSShell tests in HDFS. Right now the Stat class is truncating off the octal digit with the sticky bit, let's add support for it back in on supported platforms. This digit also has suid (s) and special execute (X) bits, so don't ignore those either. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9788) Add sticky bit support to org.apache.hadoop.fs.Stat
[ https://issues.apache.org/jira/browse/HADOOP-9788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-9788. - Resolution: Duplicate Duping it to HADOOP-9652 since we reverted it and want to just fix all these issues there in a new patch. Add sticky bit support to org.apache.hadoop.fs.Stat --- Key: HADOOP-9788 URL: https://issues.apache.org/jira/browse/HADOOP-9788 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.3.0 Reporter: Andrew Wang Assignee: Andrew Wang This is currently breaking DFSShell tests in HDFS. Right now the Stat class is truncating off the octal digit with the sticky bit, let's add support for it back in on supported platforms. This digit also has suid (s) and special execute (X) bits, so don't ignore those either. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
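The truncation described above comes down to octal parsing: the mode digit carrying the setuid/setgid/sticky bits (e.g. the leading "1" in "1755" from {{stat -c %a}}) must not be dropped. A hedged sketch, with an invented class name rather than the actual Stat code:

```java
// Hypothetical sketch (invented class name): parse the full octal mode
// string instead of keeping only the low three digits, so the
// setuid/setgid/sticky bits survive.
class StatModeParser {
    static int parseOctalMode(String octal) {
        return Integer.parseInt(octal, 8);  // all four digits, base 8
    }
    static boolean hasStickyBit(int mode) { return (mode & 01000) != 0; }
    static boolean hasSetuidBit(int mode) { return (mode & 04000) != 0; }
}
```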
[jira] [Created] (HADOOP-9783) Fix OS detection for RawLocalFileSystem#getFileLinkStatus fallback path
Andrew Wang created HADOOP-9783: --- Summary: Fix OS detection for RawLocalFileSystem#getFileLinkStatus fallback path Key: HADOOP-9783 URL: https://issues.apache.org/jira/browse/HADOOP-9783 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.3.0 Reporter: Andrew Wang Assignee: Andrew Wang HADOOP-9652 calls out to {{stat(1)}} on supported platforms to get additional file metadata and handle symlinks. The old, incorrect fallback path was left in for unsupported platforms. However, the fallback is currently only in place for Windows; let's make it for any unsupported platform (e.g. Mac). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9761) ViewFileSystem#rename fails when using DistributedFileSystem
Andrew Wang created HADOOP-9761: --- Summary: ViewFileSystem#rename fails when using DistributedFileSystem Key: HADOOP-9761 URL: https://issues.apache.org/jira/browse/HADOOP-9761 Project: Hadoop Common Issue Type: Bug Components: viewfs Affects Versions: 3.0.0, 2.1.0-beta Reporter: Andrew Wang Assignee: Andrew Wang ViewFileSystem currently passes unqualified paths (no scheme or authority) to underlying FileSystems when doing a rename. DistributedFileSystem symlink support added in HADOOP-9418 needs to qualify and check rename sources and destinations since cross-filesystem renames aren't supported, so this breaks in the following way - Default FS URI is configured to viewfs://viewfs - When doing a rename, ViewFileSystem checks to make sure both src and dst FileSystems are the same (which they are, both in same DFS), and then calls DistributedFileSystem#rename with unqualified remainder paths - Since these paths are unqualified, DFS qualifies them with the default FS to check that it can do the rename. This turns it into viewfs://viewfs/path - Since viewfs://viewfs is not the DFS's URI, DFS errors out the rename. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
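The failure mode described can be modeled with plain java.net.URI (this is a schematic sketch, not Hadoop's actual Path/FileSystem code; {{qualify}} and {{sameFileSystem}} are invented names mirroring the checks the report describes):

```java
import java.net.URI;

// Schematic model of the bug: an unqualified path inherits the default FS
// URI when qualified, so a DFS whose own URI is hdfs://... ends up
// comparing against a viewfs:// path and rejects the rename.
class RenameQualification {
    static URI qualify(URI defaultFs, String path) {
        URI u = URI.create(path);
        // Already qualified paths keep their scheme; unqualified ones
        // pick up the default FS scheme and authority.
        return (u.getScheme() != null) ? u : defaultFs.resolve(u);
    }

    static boolean sameFileSystem(URI fsUri, URI pathUri) {
        return fsUri.getScheme().equals(pathUri.getScheme());
    }
}
```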
[jira] [Resolved] (HADOOP-7905) Port FileContext symlinks to FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-7905. - Resolution: Duplicate Resolving as dupe of HADOOP-8040 subtasks HADOOP-9414 and HADOOP-9416. Port FileContext symlinks to FileSystem --- Key: HADOOP-7905 URL: https://issues.apache.org/jira/browse/HADOOP-7905 Project: Hadoop Common Issue Type: New Feature Components: fs Reporter: Eli Collins FileSystem isn't going away anytime soon (HADOOP-6446). It would be useful to implement HADOOP-6421 for FileSystem, this would allow interoperability between FileContext and FileSystem (eg currently a symlink created via FileContext is not readable via FileSystem), which will help people migrate to FileContext. The work is mostly moving the client-side link resolution code to a shared place and porting the tests or modifying them to be FC/FS agnostic. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9414) Refactor out FSLinkResolver and relevant helper methods
Andrew Wang created HADOOP-9414: --- Summary: Refactor out FSLinkResolver and relevant helper methods Key: HADOOP-9414 URL: https://issues.apache.org/jira/browse/HADOOP-9414 Project: Hadoop Common Issue Type: Sub-task Reporter: Andrew Wang Can reuse the existing FsLinkResolver within FileContext for FileSystem as well. Also move around / pull out some other reusable functions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9415) Fix NullPointerException in getLinkTarget
Andrew Wang created HADOOP-9415: --- Summary: Fix NullPointerException in getLinkTarget Key: HADOOP-9415 URL: https://issues.apache.org/jira/browse/HADOOP-9415 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 3.0.0 Reporter: Andrew Wang Priority: Minor {{HdfsFileStatus#getLinkTarget}} can throw a NPE in {{DFSUtil#bytes2String}} if {{symlink}} is null. Better to instead return null and propagate this to the client. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9416) Add new symlink resolution methods to FileSystem and FSLinkResolver
Andrew Wang created HADOOP-9416: --- Summary: Add new symlink resolution methods to FileSystem and FSLinkResolver Key: HADOOP-9416 URL: https://issues.apache.org/jira/browse/HADOOP-9416 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 3.0.0 Reporter: Andrew Wang Add new methods for symlink resolution to FileSystem, and add resolution support for FileSystem to FSLinkResolver. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9417) Support for symlink resolution in LocalFileSystem / RawLocalFileSystem
Andrew Wang created HADOOP-9417: --- Summary: Support for symlink resolution in LocalFileSystem / RawLocalFileSystem Key: HADOOP-9417 URL: https://issues.apache.org/jira/browse/HADOOP-9417 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 3.0.0 Reporter: Andrew Wang Assignee: Andrew Wang Add symlink resolution support to LocalFileSystem/RawLocalFileSystem as well as tests. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9418) Add symlink resolution support to DistributedFileSystem
Andrew Wang created HADOOP-9418:
---
Summary: Add symlink resolution support to DistributedFileSystem
Key: HADOOP-9418
URL: https://issues.apache.org/jira/browse/HADOOP-9418
Project: Hadoop Common
Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang

Add symlink resolution support to DistributedFileSystem, as well as tests.
[jira] [Created] (HADOOP-9394) Port findHangingTest.sh from HBase to Hadoop
Andrew Wang created HADOOP-9394:
---
Summary: Port findHangingTest.sh from HBase to Hadoop
Key: HADOOP-9394
URL: https://issues.apache.org/jira/browse/HADOOP-9394
Project: Hadoop Common
Issue Type: Improvement
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor

HBase has this handy {{dev-support/findHangingTests.sh}} script, which parses Jenkins consoleText and finds hanging tests for you. This has been especially useful for identifying balancer test timeouts (see HDFS-4376 and HDFS-4261). It'd be nice to have this in our own {{dev-support}} directory.
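The script's heuristic can be sketched as follows, assuming the surefire console format in which each test class prints a "Running &lt;class&gt;" line at start and a "Tests run: ..." line on completion; a class that started but never reported results is a hang candidate. The class and method names here are invented for illustration, not the ported script itself:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the hanging-test heuristic: track test classes
// that printed "Running ..." and cross off the most recently started one
// each time a "Tests run: ..." completion line appears.
public class HangingTestSketch {
    public static Set<String> findHanging(String consoleText) {
        Set<String> started = new LinkedHashSet<>();
        for (String line : consoleText.split("\n")) {
            if (line.startsWith("Running ")) {
                started.add(line.substring("Running ".length()).trim());
            } else if (line.startsWith("Tests run:") && !started.isEmpty()) {
                // Surefire reports results right after a class finishes,
                // so the completion belongs to the most recent start.
                String last = null;
                for (String s : started) {
                    last = s;
                }
                started.remove(last);
            }
        }
        return started; // classes that started but never finished
    }
}
```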
[jira] [Created] (HADOOP-9355) Abstract symlink tests to use either FileContext or FileSystem
Andrew Wang created HADOOP-9355:
---
Summary: Abstract symlink tests to use either FileContext or FileSystem
Key: HADOOP-9355
URL: https://issues.apache.org/jira/browse/HADOOP-9355
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Andrew Wang

We'd like to run the symlink tests using both FileContext and the upcoming FileSystem implementation. The first step here is abstracting the test logic to run on an abstract filesystem implementation.
[jira] [Created] (HADOOP-9267) hadoop -help, -h, --help should show usage instructions
Andrew Wang created HADOOP-9267:
---
Summary: hadoop -help, -h, --help should show usage instructions
Key: HADOOP-9267
URL: https://issues.apache.org/jira/browse/HADOOP-9267
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor

It's not friendly to new users when the command-line scripts don't show usage instructions when passed the de facto Unix usage flags. Imagine this sequence of commands:

{noformat}
% hadoop --help
Error: No command named `--help' was found. Perhaps you meant `hadoop -help'
% hadoop -help
Error: No command named `-help' was found. Perhaps you meant `hadoop help'
% hadoop help
Exception in thread "main" java.lang.NoClassDefFoundError: help
Caused by: java.lang.ClassNotFoundException: help
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: help. Program will exit.
{noformat}

The same applies to the `hdfs` script.
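The desired behavior amounts to recognizing the usual help flags before command dispatch. A minimal sketch of that check; the usage string, class, and method below are hypothetical, not the actual bin/hadoop logic:

```java
// Hypothetical sketch: intercept de facto Unix help flags before treating
// the first argument as a command name to dispatch.
public class UsageFlagSketch {
    static final String USAGE = "Usage: hadoop [--config confdir] COMMAND";

    /** Returns usage text for any common help flag, else null. */
    public static String maybeUsage(String arg) {
        if ("-h".equals(arg) || "-help".equals(arg)
                || "--help".equals(arg) || "help".equals(arg)) {
            return USAGE;
        }
        return null; // not a help request; dispatch as a normal command
    }
}
```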
[jira] [Created] (HADOOP-8541) Better high-percentile latency metrics
Andrew Wang created HADOOP-8541:
---
Summary: Better high-percentile latency metrics
Key: HADOOP-8541
URL: https://issues.apache.org/jira/browse/HADOOP-8541
Project: Hadoop Common
Issue Type: Improvement
Components: metrics
Reporter: Andrew Wang

Based on discussion in HBASE-6261 and with some HDFS devs, I'd like to make better high-percentile latency metrics a part of hadoop-common. I've already got a working implementation of [1], an efficient algorithm for estimating quantiles on a stream of values. It allows you to specify arbitrary quantiles to track (e.g. 50th, 75th, 90th, 95th, 99th), along with tight error bounds. This estimator can be snapshotted and reset periodically to get a feel for how these percentiles are changing over time. I propose creating a new MutableQuantiles class that does this.

[1] isn't completely without overhead (~1MB memory for reasonably sized windows), which is why I hesitate to add it to the existing MutableStat class.

[1] Cormode, Korn, Muthukrishnan, and Srivastava. Effective Computation of Biased Quantiles over Data Streams. ICDE 2005.
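The snapshot-and-reset idea can be illustrated with a deliberately naive tracker that buffers a window of samples and sorts it on snapshot. The real proposal uses the streaming estimator from [1] precisely to avoid this buffering cost, so the class below is only a sketch of the interface's behavior:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Naive illustration of a resettable quantile tracker: buffer samples,
// sort a copy on snapshot, report the requested percentile, and allow
// the window to be cleared for the next measurement interval.
public class NaiveQuantiles {
    private final List<Long> window = new ArrayList<>();

    /** Records one latency sample. */
    public void add(long value) {
        window.add(value);
    }

    /** Returns the value at quantile q (0 < q <= 1) over the window. */
    public long snapshot(double q) {
        List<Long> sorted = new ArrayList<>(window);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(q * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    /** Clears the window so the next interval is measured afresh. */
    public void reset() {
        window.clear();
    }
}
```

A streaming estimator replaces the full buffer with a compact summary that answers the same quantile queries within configured error bounds, which is what makes per-metric overhead acceptable.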