[jira] [Created] (HDFS-5144) Document time unit to NameNodeMetrics.java
Akira AJISAKA created HDFS-5144:
-----------------------------------

Summary: Document time unit to NameNodeMetrics.java
Key: HDFS-5144
URL: https://issues.apache.org/jira/browse/HDFS-5144
Project: Hadoop HDFS
Issue Type: Improvement
Components: documentation
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Priority: Minor

In o.a.h.hdfs.server.namenode.metrics.NameNodeMetrics.java, metrics are declared as follows:
{code}
@Metric("Duration in SafeMode at startup") MutableGaugeInt safeModeTime;
@Metric("Time loading FS Image at startup") MutableGaugeInt fsImageLoadTime;
{code}
Since some users may be unsure which unit (sec or msec) these metrics use, the unit should be documented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-5156) SafeModeTime metrics sometimes includes non-Safemode time.
Akira AJISAKA created HDFS-5156:
-----------------------------------

Summary: SafeModeTime metrics sometimes includes non-Safemode time.
Key: HDFS-5156
URL: https://issues.apache.org/jira/browse/HDFS-5156
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA

The SafeModeTime metric is meant to show the duration of safe mode at startup. However, the metric is set to the elapsed time since FSNamesystem started, every time safe mode is left. As a result, after executing hdfs dfsadmin -safemode enter followed by hdfs dfsadmin -safemode leave, the metric includes non-safemode time.
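The fix direction can be sketched in isolation (all class, field, and method names below are hypothetical, not the actual FSNamesystem code): record a timestamp when safe mode is entered and measure the duration from that point, rather than from system startup.

```java
// Hypothetical sketch: track safe-mode duration from the moment safe mode
// is entered, instead of from FSNamesystem startup time.
public class SafeModeTimer {
    private long safeModeEnteredMs = -1;

    public void enterSafeMode(long nowMs) {
        safeModeEnteredMs = nowMs;
    }

    /** Returns time spent in safe mode, or 0 if safe mode was never entered. */
    public long leaveSafeMode(long nowMs) {
        if (safeModeEnteredMs < 0) {
            return 0;
        }
        long duration = nowMs - safeModeEnteredMs;
        safeModeEnteredMs = -1;  // reset so a later leave reports 0
        return duration;
    }
}
```

With this shape, a manual enter/leave cycle only reports the time actually spent in safe mode.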
[jira] [Created] (HDFS-5165) FSNameSystem TotalFiles and FilesTotal metrics are the same
Akira AJISAKA created HDFS-5165:
-----------------------------------

Summary: FSNameSystem TotalFiles and FilesTotal metrics are the same
Key: HDFS-5165
URL: https://issues.apache.org/jira/browse/HDFS-5165
Project: Hadoop HDFS
Issue Type: Improvement
Reporter: Akira AJISAKA
Priority: Minor

Both the FSNameSystem TotalFiles and FilesTotal metrics mean the total number of files/dirs in the cluster. One of these metrics should be removed.
[jira] [Created] (HDFS-5297) Fix broken hyperlinks in HDFS document
Akira AJISAKA created HDFS-5297:
-----------------------------------

Summary: Fix broken hyperlinks in HDFS document
Key: HDFS-5297
URL: https://issues.apache.org/jira/browse/HDFS-5297
Project: Hadoop HDFS
Issue Type: Bug
Components: documentation
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Akira AJISAKA
Priority: Minor
Fix For: 3.0.0, 2.1.2-beta

I found a lot of broken hyperlinks in the HDFS documentation to fix. For example, in HdfsUserGuide.apt.vm, there is a broken hyperlink as below:
{noformat}
For command usage, see {{{dfsadmin}}}.
{noformat}
It should be fixed to:
{noformat}
For command usage, see {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
{noformat}
[jira] [Created] (HDFS-5336) DataNode should not output 'StartupProgress' metrics
Akira AJISAKA created HDFS-5336:
-----------------------------------

Summary: DataNode should not output 'StartupProgress' metrics
Key: HDFS-5336
URL: https://issues.apache.org/jira/browse/HDFS-5336
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 2.1.0-beta
Environment: trunk
Reporter: Akira AJISAKA
Priority: Minor

I found the following metrics output from a DataNode:
{code}
1381355455731 default.StartupProgress: Hostname=trunk, ElapsedTime=0, PercentComplete=0.0, LoadingFsImageCount=0, LoadingFsImageElapsedTime=0, LoadingFsImageTotal=0, LoadingFsImagePercentComplete=0.0, LoadingEditsCount=0, LoadingEditsElapsedTime=0, LoadingEditsTotal=0, LoadingEditsPercentComplete=0.0, SavingCheckpointCount=0, SavingCheckpointElapsedTime=0, SavingCheckpointTotal=0, SavingCheckpointPercentComplete=0.0, SafeModeCount=0, SafeModeElapsedTime=0, SafeModeTotal=0, SafeModePercentComplete=0.0
{code}
DataNode should not output the 'StartupProgress' metrics because they show the progress of NameNode startup.
[jira] [Created] (HDFS-5361) Change the unit of StartupProgress 'PercentComplete' to percentage
Akira AJISAKA created HDFS-5361:
-----------------------------------

Summary: Change the unit of StartupProgress 'PercentComplete' to percentage
Key: HDFS-5361
URL: https://issues.apache.org/jira/browse/HDFS-5361
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Priority: Minor

Currently the 'PercentComplete' metric is a ratio (maximum 1.0). This is confusing for users because the name includes "percent". The value should be multiplied by 100.
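The proposed change amounts to a one-line conversion, sketched here as a self-contained helper (hypothetical names, not the actual StartupProgress code):

```java
// Convert a completion ratio in [0.0, 1.0] to a percentage in [0.0, 100.0],
// as proposed for the 'PercentComplete' metric.
public class PercentComplete {
    public static float asPercentage(long count, long total) {
        if (total <= 0) {
            return 0.0f;  // avoid division by zero before any work is scheduled
        }
        return count / (float) total * 100.0f;
    }
}
```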
[jira] [Created] (HDFS-5492) Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk
Akira AJISAKA created HDFS-5492:
-----------------------------------

Summary: Port HDFS-2069 (Incorrect default trash interval in the docs) to trunk
Key: HDFS-5492
URL: https://issues.apache.org/jira/browse/HDFS-5492
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor

HDFS-2069 is not ported to the current documentation. The description of HDFS-2069 is as follows:
{quote}
Current HDFS architecture information about Trash is incorrectly documented as - "The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well defined interface."
It should be something like - "The current default trash interval is set to 0 (deletes files without storing them in trash). This value is a configurable parameter, fs.trash.interval, stored in core-site.xml."
{quote}
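For reference, trash is enabled by setting fs.trash.interval (in minutes) in core-site.xml; the default of 0 disables it. A minimal example (the value 1440 is just an illustration):

```xml
<!-- core-site.xml: keep deleted files in trash for 1 day (1440 minutes).
     The default of 0 deletes files immediately without using trash. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```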
[jira] [Created] (HDFS-5562) TestCacheDirectives fails on trunk
Akira AJISAKA created HDFS-5562:
-----------------------------------

Summary: TestCacheDirectives fails on trunk
Key: HDFS-5562
URL: https://issues.apache.org/jira/browse/HDFS-5562
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects Versions: 3.0.0
Reporter: Akira AJISAKA

Some tests fail on trunk:
{code}
Tests in error:
  TestCacheDirectives.testWaitForCachedReplicas:710 » Runtime Cannot start datan...
  TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled:767 » Runtime
  TestCacheDirectives.testWaitForCachedReplicasInDirectory:813 » Runtime Cannot ...
  TestCacheDirectives.testReplicationFactor:897 » Runtime Cannot start datanode ...

Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
{code}
For more details, see https://builds.apache.org/job/Hadoop-Hdfs-trunk/1592/
[jira] [Created] (HDFS-5691) Fix typo in ShortCircuitLocalRead document
Akira AJISAKA created HDFS-5691:
-----------------------------------

Summary: Fix typo in ShortCircuitLocalRead document
Key: HDFS-5691
URL: https://issues.apache.org/jira/browse/HDFS-5691
Project: Hadoop HDFS
Issue Type: Bug
Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor

There's a misspelled parameter in ShortCircuitLocalReads.apt.vm:
{code}
* dfs.client.read.shortcircuit.skip.checkusm
{code}
It should be fixed as follows:
{code}
* dfs.client.read.shortcircuit.skip.checksum
{code}
[jira] [Created] (HDFS-5778) Document new commands and parameters for improved rolling upgrades
Akira AJISAKA created HDFS-5778:
-----------------------------------

Summary: Document new commands and parameters for improved rolling upgrades
Key: HDFS-5778
URL: https://issues.apache.org/jira/browse/HDFS-5778
Project: Hadoop HDFS
Issue Type: Sub-task
Components: documentation
Affects Versions: HDFS-5535 (Rolling upgrades)
Reporter: Akira AJISAKA

The hdfs dfsadmin -rollingUpgrade command was newly added in HDFS-5752, and some other commands and parameters will be added in the future. They should be documented before merging to trunk.
[jira] [Created] (HDFS-5853) Add hadoop.user.group.metrics.percentiles.intervals to hdfs-default.xml
Akira AJISAKA created HDFS-5853:
-----------------------------------

Summary: Add hadoop.user.group.metrics.percentiles.intervals to hdfs-default.xml
Key: HDFS-5853
URL: https://issues.apache.org/jira/browse/HDFS-5853
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.3.0
Reporter: Akira AJISAKA

hadoop.user.group.metrics.percentiles.intervals was added in HDFS-5220, but the parameter is not written in hdfs-default.xml.
[jira] [Created] (HDFS-5863) Improve OfflineImageViewer
Akira AJISAKA created HDFS-5863:
-----------------------------------

Summary: Improve OfflineImageViewer
Key: HDFS-5863
URL: https://issues.apache.org/jira/browse/HDFS-5863
Project: Hadoop HDFS
Issue Type: Improvement
Components: tools
Affects Versions: 2.2.0
Reporter: Akira AJISAKA

This is an umbrella jira for improving the Offline Image Viewer.
[jira] [Created] (HDFS-5864) Missing '\n' in the output of 'hdfs oiv --help'
Akira AJISAKA created HDFS-5864:
-----------------------------------

Summary: Missing '\n' in the output of 'hdfs oiv --help'
Key: HDFS-5864
URL: https://issues.apache.org/jira/browse/HDFS-5864
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Trivial

In OfflineImageViewer.java,
{code}
"* NameDistribution: This processor analyzes the file names\n"
+ "  in the image and prints total number of file names and how frequently "
+ "file names are reused.\n" +
{code}
should be
{code}
"* NameDistribution: This processor analyzes the file names\n"
+ "  in the image and prints total number of file names and how frequently\n"
+ "  file names are reused.\n" +
{code}
[jira] [Created] (HDFS-5865) Document some arguments in 'hdfs oiv --processor' option
Akira AJISAKA created HDFS-5865:
-----------------------------------

Summary: Document some arguments in 'hdfs oiv --processor' option
Key: HDFS-5865
URL: https://issues.apache.org/jira/browse/HDFS-5865
Project: Hadoop HDFS
Issue Type: Sub-task
Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA

The Offline Image Viewer document currently says "Currently valid options are {{Ls}}, {{XML}}, and {{Indented}}" for the {{--processor}} option, but there are more options such as {{Delimited}}, {{FileDistribution}}, and {{NameDistribution}}.
[jira] [Created] (HDFS-5866) '-maxSize' and '-step' option fail in OfflineImageViewer
Akira AJISAKA created HDFS-5866:
-----------------------------------

Summary: '-maxSize' and '-step' option fail in OfflineImageViewer
Key: HDFS-5866
URL: https://issues.apache.org/jira/browse/HDFS-5866
Project: Hadoop HDFS
Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

Executing the -step and/or -maxSize options results in the following error:
{code}
$ hdfs oiv -p FileDistribution -step 102400 -i input -o output
Error parsing command-line options:
Usage: bin/hdfs oiv [OPTIONS] -i INPUTFILE -o OUTPUTFILE
{code}
[jira] [Created] (HDFS-5867) Clean up the output of NameDistribution processor
Akira AJISAKA created HDFS-5867:
-----------------------------------

Summary: Clean up the output of NameDistribution processor
Key: HDFS-5867
URL: https://issues.apache.org/jira/browse/HDFS-5867
Project: Hadoop HDFS
Issue Type: Sub-task
Environment:
The output of 'hdfs oiv -i INPUT -o OUTPUT -p NameDistribution' is as follows:
{code}
Total unique file names 86
0 names are used by 0 files between 10-13 times. Heap savings ~0 bytes.
0 names are used by 0 files between 1-9 times. Heap savings ~0 bytes.
0 names are used by 0 files between 1000- times. Heap savings ~0 bytes.
0 names are used by 0 files between 100-999 times. Heap savings ~0 bytes.
1 names are used by 13 files between 10-99 times. Heap savings ~372 bytes.
4 names are used by 34 files between 5-9 times. Heap savings ~942 bytes.
2 names are used by 8 files 4 times. Heap savings ~192 bytes.
0 names are used by 0 files 3 times. Heap savings ~0 bytes.
7 names are used by 14 files 2 times. Heap savings ~222 bytes.
Total saved heap ~1728bytes.
{code}
'between 10-13 times' should be 'over 9 times', or the lines starting with '0 names' should not be output.
Reporter: Akira AJISAKA
Priority: Minor
[jira] [Created] (HDFS-5880) Fix a typo at the title of HDFS Snapshots document
Akira AJISAKA created HDFS-5880:
-----------------------------------

Summary: Fix a typo at the title of HDFS Snapshots document
Key: HDFS-5880
URL: https://issues.apache.org/jira/browse/HDFS-5880
Project: Hadoop HDFS
Issue Type: Bug
Components: documentation, snapshots
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor

The title of the HDFS Snapshots document is "HFDS Snapshots". We should fix it.
[jira] [Resolved] (HDFS-5864) Missing '\n' in the output of 'hdfs oiv --help'
[ https://issues.apache.org/jira/browse/HDFS-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA resolved HDFS-5864.
---------------------------------
Resolution: Cannot Reproduce
Target Version/s: (was: 2.4.0)

I cannot reproduce this after the HDFS-5698 branch was merged.

> Missing '\n' in the output of 'hdfs oiv --help'
> -----------------------------------------------
>
> Key: HDFS-5864
> URL: https://issues.apache.org/jira/browse/HDFS-5864
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: tools
> Affects Versions: 2.2.0
> Reporter: Akira AJISAKA
> Priority: Trivial
> Labels: newbie
>
> In OfflineImageViewer.java,
> {code}
> "* NameDistribution: This processor analyzes the file names\n"
> + "  in the image and prints total number of file names and how frequently "
> + "file names are reused.\n" +
> {code}
> should be
> {code}
> "* NameDistribution: This processor analyzes the file names\n"
> + "  in the image and prints total number of file names and how frequently\n"
> + "  file names are reused.\n" +
> {code}
[jira] [Resolved] (HDFS-5867) Clean up the output of NameDistribution processor
[ https://issues.apache.org/jira/browse/HDFS-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA resolved HDFS-5867.
---------------------------------
Resolution: Cannot Reproduce
Target Version/s: (was: 2.4.0)

The NameDistribution processor is not supported after the HDFS-5698 branch was merged. Closing this issue.

> Clean up the output of NameDistribution processor
> -------------------------------------------------
>
> Key: HDFS-5867
> URL: https://issues.apache.org/jira/browse/HDFS-5867
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: tools
> Reporter: Akira AJISAKA
> Priority: Minor
> Labels: newbie
>
> The output of 'hdfs oiv -i INPUT -o OUTPUT -p NameDistribution' is as follows:
> {code}
> Total unique file names 86
> 0 names are used by 0 files between 10-13 times. Heap savings ~0 bytes.
> 0 names are used by 0 files between 1-9 times. Heap savings ~0 bytes.
> 0 names are used by 0 files between 1000- times. Heap savings ~0 bytes.
> 0 names are used by 0 files between 100-999 times. Heap savings ~0 bytes.
> 1 names are used by 13 files between 10-99 times. Heap savings ~372 bytes.
> 4 names are used by 34 files between 5-9 times. Heap savings ~942 bytes.
> 2 names are used by 8 files 4 times. Heap savings ~192 bytes.
> 0 names are used by 0 files 3 times. Heap savings ~0 bytes.
> 7 names are used by 14 files 2 times. Heap savings ~222 bytes.
> Total saved heap ~1728bytes.
> {code}
> 'between 10-13 times' should be 'over 9 times', or the lines starting with '0 names' should not be output.
[jira] [Created] (HDFS-5942) Fix javadoc in OfflineImageViewer
Akira AJISAKA created HDFS-5942:
-----------------------------------

Summary: Fix javadoc in OfflineImageViewer
Key: HDFS-5942
URL: https://issues.apache.org/jira/browse/HDFS-5942
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Priority: Minor

The class descriptions of PBImageXmlWriter.java and LsrPBImage.java are as follows:
{code}
/**
 * This is the tool for analyzing file sizes in the namespace image. In order to
 * run the tool one should define a range of integers <tt>[0, maxSize]</tt> by
 * specifying <tt>maxSize</tt> and a <tt>step</tt>. The range of integers is
 * divided into segments of size <tt>step</tt>:
 ... (skip)
{code}
which is the same as the description of FileDistributionCalculator.java.
[jira] [Created] (HDFS-5952) Implement delimited processor in OfflineImageViewer
Akira AJISAKA created HDFS-5952:
-----------------------------------

Summary: Implement delimited processor in OfflineImageViewer
Key: HDFS-5952
URL: https://issues.apache.org/jira/browse/HDFS-5952
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

The Delimited processor is no longer supported after HDFS-5698 was merged. The processor is useful because its output can be analyzed by scripts and tools such as Pig.
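For illustration, a Delimited processor emits one machine-parsable record per inode. A minimal sketch of such a record formatter (the column set here is hypothetical; the actual processor's columns may differ):

```java
// Hypothetical sketch: format file metadata as one tab-delimited record,
// the kind of line-per-inode output a Delimited processor would emit and
// tools like Pig could load directly.
public class DelimitedRecord {
    public static String format(String path, long size, short replication,
                                String owner, String group, String perms) {
        return String.join("\t", path, Long.toString(size),
                Short.toString(replication), owner, group, perms);
    }
}
```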
[jira] [Created] (HDFS-5956) A file size is multiplied by the replication factor in 'hdfs oiv -p FileDistribution' option
Akira AJISAKA created HDFS-5956:
-----------------------------------

Summary: A file size is multiplied by the replication factor in 'hdfs oiv -p FileDistribution' option
Key: HDFS-5956
URL: https://issues.apache.org/jira/browse/HDFS-5956
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

In FileDistributionCalculator.java,
{code}
long fileSize = 0;
for (BlockProto b : f.getBlocksList()) {
  fileSize += b.getNumBytes() * f.getReplication();
}
maxFileSize = Math.max(fileSize, maxFileSize);
totalSpace += fileSize;
{code}
should be
{code}
long fileSize = 0;
for (BlockProto b : f.getBlocksList()) {
  fileSize += b.getNumBytes();
}
maxFileSize = Math.max(fileSize, maxFileSize);
totalSpace += fileSize * f.getReplication();
{code}
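The effect of the fix can be checked with plain numbers (a self-contained sketch, not the actual FileDistributionCalculator code): a file's size should count each block's bytes once, while the space it consumes is that size times the replication factor.

```java
// Self-contained sketch of the corrected accounting: file size sums raw
// block bytes; consumed space multiplies that sum by the replication factor.
public class SpaceAccounting {
    public static long fileSize(long[] blockBytes) {
        long size = 0;
        for (long b : blockBytes) {
            size += b;  // each byte counted once, regardless of replication
        }
        return size;
    }

    public static long consumedSpace(long[] blockBytes, int replication) {
        return fileSize(blockBytes) * replication;
    }
}
```

With the original (buggy) code, a 192-byte file with replication 3 would be reported as a 576-byte file, inflating maxFileSize.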
[jira] [Created] (HDFS-5959) Fix typo at section name in FSImageFormatProtobuf.java
Akira AJISAKA created HDFS-5959:
-----------------------------------

Summary: Fix typo at section name in FSImageFormatProtobuf.java
Key: HDFS-5959
URL: https://issues.apache.org/jira/browse/HDFS-5959
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Akira AJISAKA
Priority: Minor

There's a typo "REFRENCE" in
{code}
public enum SectionName {
  NS_INFO("NS_INFO"),
  STRING_TABLE("STRING_TABLE"),
  INODE("INODE"),
  INODE_REFRENCE("INODE_REFRENCE"),
  SNAPSHOT("SNAPSHOT"),
{code}
It should be "REFERENCE".
[jira] [Created] (HDFS-5975) Create an option to specify a file path for OfflineImageViewer
Akira AJISAKA created HDFS-5975:
-----------------------------------

Summary: Create an option to specify a file path for OfflineImageViewer
Key: HDFS-5975
URL: https://issues.apache.org/jira/browse/HDFS-5975
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor

The output of OfflineImageViewer becomes quite large if the input fsimage is large. I propose a '-filePath' option to make the output smaller. The command below would output the equivalent of {{ls -R}} for {{/user/root}}:
{code}
hdfs oiv -i input -o output -p Ls -filePath /user/root
{code}
[jira] [Created] (HDFS-5978) Create a tool to take fsimage and expose read-only WebHDFS API
Akira AJISAKA created HDFS-5978:
-----------------------------------

Summary: Create a tool to take fsimage and expose read-only WebHDFS API
Key: HDFS-5978
URL: https://issues.apache.org/jira/browse/HDFS-5978
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Reporter: Akira AJISAKA

Suggested in HDFS-5975. Add an option that exposes a read-only version of the WebHDFS API from OfflineImageViewer. You can imagine it looking very similar to jhat. That way we can allow the operator to use the existing command-line tools, or even the web UI, to debug the fsimage. It also allows the operator to browse the file system interactively and figure out what went wrong.
[jira] [Created] (HDFS-5990) Create options to search files/dirs in OfflineImageViewer
Akira AJISAKA created HDFS-5990:
-----------------------------------

Summary: Create options to search files/dirs in OfflineImageViewer
Key: HDFS-5990
URL: https://issues.apache.org/jira/browse/HDFS-5990
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Akira AJISAKA
Priority: Minor

An enhancement of HDFS-5975. I suggest options to search files/dirs in OfflineImageViewer. An example command is as follows:
{code}
hdfs oiv -i input -o output -p Ls -owner theuser -group supergroup -minSize 1024 -maxSize 1048576
{code}
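The proposed options boil down to a predicate over each file's metadata. A minimal sketch of that matching logic (all names are hypothetical, only illustrating the filter semantics):

```java
// Hypothetical sketch of matching a file against the proposed search
// options: owner, group (null means "don't filter"), and a size range.
public class FileFilter {
    public static boolean matches(String owner, String group, long size,
                                  String wantOwner, String wantGroup,
                                  long minSize, long maxSize) {
        return (wantOwner == null || wantOwner.equals(owner))
            && (wantGroup == null || wantGroup.equals(group))
            && size >= minSize && size <= maxSize;
    }
}
```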
[jira] [Created] (HDFS-5991) TestLoadGenerator#testLoadGenerator fails on trunk
Akira AJISAKA created HDFS-5991:
-----------------------------------

Summary: TestLoadGenerator#testLoadGenerator fails on trunk
Key: HDFS-5991
URL: https://issues.apache.org/jira/browse/HDFS-5991
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Akira AJISAKA

From https://builds.apache.org/job/PreCommit-HDFS-Build/6194//testReport/
{code}
java.io.IOException: Stream closed
	at java.io.BufferedReader.ensureOpen(BufferedReader.java:97)
	at java.io.BufferedReader.readLine(BufferedReader.java:292)
	at java.io.BufferedReader.readLine(BufferedReader.java:362)
	at org.apache.hadoop.fs.loadGenerator.LoadGenerator.loadScriptFile(LoadGenerator.java:511)
	at org.apache.hadoop.fs.loadGenerator.LoadGenerator.init(LoadGenerator.java:418)
	at org.apache.hadoop.fs.loadGenerator.LoadGenerator.run(LoadGenerator.java:324)
	at org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.testLoadGenerator(TestLoadGenerator.java:231)
{code}
[jira] [Created] (HDFS-6006) Remove duplicating code in FSNameSystem#getFileInfo
Akira AJISAKA created HDFS-6006:
-----------------------------------

Summary: Remove duplicating code in FSNameSystem#getFileInfo
Key: HDFS-6006
URL: https://issues.apache.org/jira/browse/HDFS-6006
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial

In FSNameSystem#getFileInfo, the src file name is checked twice:
{code}
if (!DFSUtil.isValidName(src)) {
  throw new InvalidPathException("Invalid file name: " + src);
}
HdfsFileStatus stat = null;
FSPermissionChecker pc = getPermissionChecker();
checkOperation(OperationCategory.READ);
if (!DFSUtil.isValidName(src)) {
  throw new InvalidPathException("Invalid file name: " + src);
}
{code}
The latter check should be removed.
[jira] [Created] (HDFS-6048) DFSClient fails if native library doesn't exist
Akira AJISAKA created HDFS-6048:
-----------------------------------

Summary: DFSClient fails if native library doesn't exist
Key: HDFS-6048
URL: https://issues.apache.org/jira/browse/HDFS-6048
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Akira AJISAKA
Priority: Blocker

When I executed FsShell commands (such as hdfs dfs -ls, -mkdir, -cat) on trunk, an {{UnsupportedOperationException}} occurred in {{o.a.h.net.unix.DomainSocketWatcher}} and the commands failed.
[jira] [Created] (HDFS-6073) NameNodeResourceChecker prints 'null' mount point to the log
Akira AJISAKA created HDFS-6073:
-----------------------------------

Summary: NameNodeResourceChecker prints 'null' mount point to the log
Key: HDFS-6073
URL: https://issues.apache.org/jira/browse/HDFS-6073
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

If the available space on the volume used for saving the fsimage is less than 100MB (the default), NameNodeResourceChecker logs the following:
{code}
Space available on volume 'null' is 92274688, which is below the configured reserved amount 104857600
{code}
It should print an appropriate mount point instead of 'null'.
[jira] [Created] (HDFS-6090) Use MiniDFSCluster.Builder instead of deprecated constructors
Akira AJISAKA created HDFS-6090:
-----------------------------------

Summary: Use MiniDFSCluster.Builder instead of deprecated constructors
Key: HDFS-6090
URL: https://issues.apache.org/jira/browse/HDFS-6090
Project: Hadoop HDFS
Issue Type: Improvement
Components: test
Reporter: Akira AJISAKA
Priority: Minor

Some test classes use deprecated constructors such as {{MiniDFSCluster(Configuration, int, boolean, String[], String[])}} for building a MiniDFSCluster. These classes should use {{MiniDFSCluster.Builder}} instead, to reduce javac warnings and improve code readability.
[jira] [Resolved] (HDFS-5997) TestHASafeMode#testBlocksAddedWhileStandbyIsDown fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA resolved HDFS-5997.
---------------------------------
Resolution: Duplicate

> TestHASafeMode#testBlocksAddedWhileStandbyIsDown fails in trunk
> ---------------------------------------------------------------
>
> Key: HDFS-5997
> URL: https://issues.apache.org/jira/browse/HDFS-5997
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ted Yu
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1681/ :
> REGRESSION: org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode.testBlocksAddedWhileStandbyIsDown
> Error Message:
> {code}
> Bad safemode status: 'Safe mode is ON. The reported blocks 7 has reached the threshold 0.9990 of total blocks 6. The number of live datanodes 3 has reached the minimum number 0. Safe mode will be turned off automatically in 28 seconds.'
> {code}
> Stack Trace:
> {code}
> java.lang.AssertionError: Bad safemode status: 'Safe mode is ON. The reported blocks 7 has reached the threshold 0.9990 of total blocks 6. The number of live datanodes 3 has reached the minimum number 0. Safe mode will be turned off automatically in 28 seconds.'
> 	at org.junit.Assert.fail(Assert.java:93)
> 	at org.junit.Assert.assertTrue(Assert.java:43)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode.assertSafeMode(TestHASafeMode.java:493)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode.testBlocksAddedWhileStandbyIsDown(TestHASafeMode.java:660)
> {code}
[jira] [Resolved] (HDFS-6104) TestFsLimits#testDefaultMaxComponentLength Fails on branch-2
[ https://issues.apache.org/jira/browse/HDFS-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA resolved HDFS-6104.
---------------------------------
Resolution: Invalid
Assignee: (was: Mit Desai)

Closing this issue because the test was removed by HDFS-6102.

> TestFsLimits#testDefaultMaxComponentLength Fails on branch-2
> ------------------------------------------------------------
>
> Key: HDFS-6104
> URL: https://issues.apache.org/jira/browse/HDFS-6104
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.4.0
> Reporter: Mit Desai
> Labels: java7
>
> testDefaultMaxComponentLength fails intermittently with the following error:
> {noformat}
> java.lang.AssertionError: expected:<0> but was:<255>
> 	at org.junit.Assert.fail(Assert.java:93)
> 	at org.junit.Assert.failNotEquals(Assert.java:647)
> 	at org.junit.Assert.assertEquals(Assert.java:128)
> 	at org.junit.Assert.assertEquals(Assert.java:472)
> 	at org.junit.Assert.assertEquals(Assert.java:456)
> 	at org.apache.hadoop.hdfs.server.namenode.TestFsLimits.testDefaultMaxComponentLength(TestFsLimits.java:90)
> {noformat}
> On doing some research, I found that this is actually a JDK7 issue. The test always fails when it runs after any test that runs the addChildWithName() method.
[jira] [Created] (HDFS-6153) Add fileId and childrenNum fields to Json schema in the WebHDFS document
Akira AJISAKA created HDFS-6153:
-----------------------------------

Summary: Add fileId and childrenNum fields to Json schema in the WebHDFS document
Key: HDFS-6153
URL: https://issues.apache.org/jira/browse/HDFS-6153
Project: Hadoop HDFS
Issue Type: Bug
Components: documentation, webhdfs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor

WebHDFS now returns FileStatus JSON objects that include fileId and childrenNum fields, but these fields are not documented.
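For illustration, an abbreviated FileStatus JSON object with the two undocumented fields may look like the following (field values here are made up, and most other FileStatus fields are elided):

```json
{
  "FileStatus": {
    "fileId": 16387,
    "childrenNum": 2,
    "type": "DIRECTORY",
    "owner": "hdfs",
    "group": "supergroup"
  }
}
```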
[jira] [Created] (HDFS-6169) WebImageViewer should support recursive liststatus operation.
Akira AJISAKA created HDFS-6169:
-----------------------------------

Summary: WebImageViewer should support recursive liststatus operation.
Key: HDFS-6169
URL: https://issues.apache.org/jira/browse/HDFS-6169
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

The Lsr processor was removed from OfflineImageViewer by HDFS-6164, but the Web processor (WebImageViewer) doesn't support a recursive {{LISTSTATUS}} operation. Users now need to query every directory to get the information of all the files/dirs in a fsimage.
[jira] [Created] (HDFS-6173) Move the default processor from Ls to Web in OfflineImageViewer
Akira AJISAKA created HDFS-6173:
-----------------------------------

Summary: Move the default processor from Ls to Web in OfflineImageViewer
Key: HDFS-6173
URL: https://issues.apache.org/jira/browse/HDFS-6173
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA

{code}
String processor = cmd.getOptionValue("p", "Ls");
{code}
HDFS-6164 removed the {{Ls}} processor from {{OfflineImageViewer}}, but the default processor is still set to {{Ls}}. The default should be set to {{Web}}.
[jira] [Resolved] (HDFS-5990) Create options to search files/dirs in OfflineImageViewer
[ https://issues.apache.org/jira/browse/HDFS-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA resolved HDFS-5990.
---------------------------------
Resolution: Invalid
Assignee: (was: Akira AJISAKA)

> Create options to search files/dirs in OfflineImageViewer
> ---------------------------------------------------------
>
> Key: HDFS-5990
> URL: https://issues.apache.org/jira/browse/HDFS-5990
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: tools
> Affects Versions: 2.5.0
> Reporter: Akira AJISAKA
>
> Add some query options to the liststatus operation in WebImageViewer to search files/dirs in a fsimage. An example query is as follows:
> {code}
> curl -i http://localhost:5978/?op=liststatus&owner=root&group=supergroup&minsize=1&maxsize=1048576&recursive=true
> {code}
[jira] [Created] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer
Akira AJISAKA created HDFS-6210:
-----------------------------------

Summary: Support GETACLSTATUS operation in WebImageViewer
Key: HDFS-6210
URL: https://issues.apache.org/jira/browse/HDFS-6210
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA

In HDFS-6170, I found that {{GETACLSTATUS}} operation support is also required to execute hdfs dfs -ls against WebImageViewer:
{code}
[root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad Request
{code}
[jira] [Created] (HDFS-6240) WebImageViewer returns 404 if LISTSTATUS to an empty directory
Akira AJISAKA created HDFS-6240:
-----------------------------------

Summary: WebImageViewer returns 404 if LISTSTATUS to an empty directory
Key: HDFS-6240
URL: https://issues.apache.org/jira/browse/HDFS-6240
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA

{{WebImageViewer}} returns 404 (Not Found) for a {{LISTSTATUS}} request on an empty directory. It should return 200 (OK) and empty FileStatuses.
[jira] [Created] (HDFS-6249) Output AclEntry in PBImageXmlWriter
Akira AJISAKA created HDFS-6249:
-----------------------------------

Summary: Output AclEntry in PBImageXmlWriter
Key: HDFS-6249
URL: https://issues.apache.org/jira/browse/HDFS-6249
Project: Hadoop HDFS
Issue Type: Sub-task
Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor

It would be useful if {{PBImageXmlWriter}} also output {{AclEntry}}.
[jira] [Created] (HDFS-6256) Clean up ImageVisitor and SpotCheckImageVisitor
Akira AJISAKA created HDFS-6256: --- Summary: Clean up ImageVisitor and SpotCheckImageVisitor Key: HDFS-6256 URL: https://issues.apache.org/jira/browse/HDFS-6256 Project: Hadoop HDFS Issue Type: Improvement Components: tools Reporter: Akira AJISAKA Assignee: Akira AJISAKA Dead code in OfflineImageViewer was removed by HDFS-6158, but {{ImageVisitor.java}} and {{SpotCheckImageVisitor.java}} still exist. They have become dead code and should be removed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties
Akira AJISAKA created HDFS-6263: --- Summary: Remove DRFA.MaxBackupIndex config from log4j.properties Key: HDFS-6263 URL: https://issues.apache.org/jira/browse/HDFS-6263 Project: Hadoop HDFS Issue Type: Test Affects Versions: 2.4.0 Reporter: Akira AJISAKA Priority: Minor HDFS-side of HADOOP-10525. {code} # uncomment the next line to limit number of backup files # log4j.appender.ROLLINGFILE.MaxBackupIndex=10 {code} In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the above lines should be removed because the appender (DRFA) doesn't support MaxBackupIndex config. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens
Akira AJISAKA created HDFS-6310: --- Summary: PBImageXmlWriter should output information about Delegation Tokens Key: HDFS-6310 URL: https://issues.apache.org/jira/browse/HDFS-6310 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.4.0 Reporter: Akira AJISAKA Separated from HDFS-6293. The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML option does not show any tokens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6400) Cannot execute hdfs oiv_legacy
Akira AJISAKA created HDFS-6400: --- Summary: Cannot execute hdfs oiv_legacy Key: HDFS-6400 URL: https://issues.apache.org/jira/browse/HDFS-6400 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.5.0 Reporter: Akira AJISAKA Assignee: Akira AJISAKA Priority: Critical Attachments: HDFS-6400.patch HDFS-6293 added the hdfs oiv_legacy command to view a legacy fsimage, but the command cannot be executed. In {{hdfs}}, {code} elif [ "COMMAND" = "oiv_legacy" ] ; then CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer {code} should be {code} elif [ "$COMMAND" = "oiv_legacy" ] ; then CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6419) TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk
Akira AJISAKA created HDFS-6419: --- Summary: TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk Key: HDFS-6419 URL: https://issues.apache.org/jira/browse/HDFS-6419 Project: Hadoop HDFS Issue Type: Test Affects Versions: 2.5.0 Reporter: Akira AJISAKA TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk. See https://builds.apache.org/job/PreCommit-HDFS-Build/6908//testReport/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6517) Update hadoop-metrics2.properties examples to Yarn
Akira AJISAKA created HDFS-6517: --- Summary: Update hadoop-metrics2.properties examples to Yarn Key: HDFS-6517 URL: https://issues.apache.org/jira/browse/HDFS-6517 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Akira AJISAKA HDFS-side of HADOOP-9919. HADOOP-9919 updated the hadoop-metrics2.properties examples to YARN; however, the packaged examples are still old because the hadoop-metrics2.properties file in the HDFS project is the one that is actually packaged. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6519) Document oiv_legacy command
Akira AJISAKA created HDFS-6519: --- Summary: Document oiv_legacy command Key: HDFS-6519 URL: https://issues.apache.org/jira/browse/HDFS-6519 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Affects Versions: 2.5.0 Reporter: Akira AJISAKA HDFS-6293 introduced oiv_legacy command. The usage of the command should be included in OfflineImageViewer.apt.vm. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6550) Document MapReduce metrics
Akira AJISAKA created HDFS-6550: --- Summary: Document MapReduce metrics Key: HDFS-6550 URL: https://issues.apache.org/jira/browse/HDFS-6550 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Akira AJISAKA Assignee: Akira AJISAKA MapReduce-side of HADOOP-6350. Add MapReduce metrics to Metrics document. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6558) Missing '\n' in the description of dfsadmin -rollingUpgrade
Akira AJISAKA created HDFS-6558: --- Summary: Missing '\n' in the description of dfsadmin -rollingUpgrade Key: HDFS-6558 URL: https://issues.apache.org/jira/browse/HDFS-6558 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.4.0 Reporter: Akira AJISAKA Priority: Trivial In DFSAdmin.java, '\n' should be added at the end of the line {code} +prepare: prepare a new rolling upgrade. {code} to clean up the following help message. {code} $ hdfs dfsadmin -help rollingUpgrade -rollingUpgrade [query|prepare|finalize]: query: query the current rolling upgrade status. prepare: prepare a new rolling upgrade. finalize: finalize the current rolling upgrade. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6559) Wrong option dfsadmin -rollingUpgrade start is in the document
Akira AJISAKA created HDFS-6559: --- Summary: Wrong option dfsadmin -rollingUpgrade start is in the document Key: HDFS-6559 URL: https://issues.apache.org/jira/browse/HDFS-6559 Project: Hadoop HDFS Issue Type: Bug Components: documentation Affects Versions: 2.4.0 Reporter: Akira AJISAKA Priority: Minor In HdfsRollingUpgrade.xml, {code} <source>hdfs dfsadmin -rollingUpgrade <query|start|finalize></source> {code} should be {code} <source>hdfs dfsadmin -rollingUpgrade <query|prepare|finalize></source> {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6571) NameNode should delete intermediate fsimage.ckpt when checkpoint fails
Akira AJISAKA created HDFS-6571: --- Summary: NameNode should delete intermediate fsimage.ckpt when checkpoint fails Key: HDFS-6571 URL: https://issues.apache.org/jira/browse/HDFS-6571 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.4.0 Reporter: Akira AJISAKA When a checkpoint fails while getting a new fsimage from the standby NameNode or SecondaryNameNode, the intermediate fsimage (fsimage.ckpt_<txid>) is left behind and never cleaned up. If the fsimage is large and checkpoints fail many times, the accumulating intermediate fsimages may cause the NameNode to run out of disk space. -- This message was sent by Atlassian JIRA (v6.2#6252)
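The proposed cleanup could look roughly like this minimal Java sketch (assumptions: intermediate images sit in a local storage directory and follow the fsimage.ckpt_<txid> naming scheme; this is not the actual NameNode code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the proposed behavior: after a failed checkpoint,
// delete any leftover intermediate checkpoint images in the storage dir.
class CheckpointCleanup {
    static int deleteIntermediateImages(Path storageDir) throws IOException {
        int deleted = 0;
        // Glob matches only intermediate images, never finalized fsimage_* files.
        try (DirectoryStream<Path> files =
                 Files.newDirectoryStream(storageDir, "fsimage.ckpt_*")) {
            for (Path p : files) {
                Files.delete(p);
                deleted++;
            }
        }
        return deleted;
    }
}
```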
[jira] [Resolved] (HDFS-6654) Setting Extended ACLs recursively for another user belonging to the same group is not working
[ https://issues.apache.org/jira/browse/HDFS-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-6654. - Resolution: Not a Problem Closing this issue. [~andreina], please feel free to reopen this if you disagree. Setting Extended ACLs recursively for another user belonging to the same group is not working --- Key: HDFS-6654 URL: https://issues.apache.org/jira/browse/HDFS-6654 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.4.1 Reporter: J.Andreina {noformat} 1.Setting Extended ACL recursively for a user belonging to the same group is not working {noformat} Step 1: Created a Dir1 with User1 ./hdfs dfs -rm -R /Dir1 Step 2: Changed the permission (600) for Dir1 recursively ./hdfs dfs -chmod -R 600 /Dir1 Step 3: setfacls is executed to give read and write permissions to User2 which belongs to the same group as User1 ./hdfs dfs -setfacl -R -m user:User2:rw- /Dir1 ./hdfs dfs -getfacl -R /Dir1 No GC_PROFILE is given. Defaults to medium. # file: /Dir1 # owner: User1 # group: supergroup user::rw- user:User2:rw- group::--- mask::rw- other::--- Step 4: Now unable to write a File to Dir1 from User2 ./hdfs dfs -put hadoop /Dir1/1 No GC_PROFILE is given. Defaults to medium. put: Permission denied: user=User2, access=EXECUTE, inode=/Dir1:User1:supergroup:drw-- {noformat} 2. Fetching filesystem name , when one of the disk configured for NN dir becomes full returns a value null. {noformat} 2014-07-08 09:23:43,020 WARN org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space available on volume 'null' is 101060608, which is below the configured reserved amount 104857600 2014-07-08 09:23:43,020 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on available disk space. Already in safe mode. 
2014-07-08 09:23:43,166 WARN org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space available on volume 'null' is 101060608, which is below the configured reserved amount 104857600 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block
Akira AJISAKA created HDFS-6682: --- Summary: Add a metric to expose the timestamp of the oldest under-replicated block Key: HDFS-6682 URL: https://issues.apache.org/jira/browse/HDFS-6682 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Akira AJISAKA In the following case, the data in HDFS is lost and a client needs to put the same file again. # A client puts a file to HDFS # A DataNode crashes before replicating a block of the file to other DataNodes I propose a metric to expose the timestamp of the oldest under-replicated/corrupt block. That way a client can know which files to retain for the retry. -- This message was sent by Atlassian JIRA (v6.2#6252)
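A minimal sketch of the bookkeeping such a metric needs (hypothetical, not the HDFS implementation): because blocks are recorded in detection order, the first entry always carries the oldest timestamp.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical tracker: remembers when each block became under-replicated,
// in insertion order, so the oldest timestamp can be exposed as a gauge.
class UnderReplicatedTracker {
    private final Map<Long, Long> blockToTimestamp = new LinkedHashMap<>();

    void blockUnderReplicated(long blockId, long nowMillis) {
        blockToTimestamp.putIfAbsent(blockId, nowMillis);
    }

    void blockReplicated(long blockId) {
        blockToTimestamp.remove(blockId);
    }

    // Value the proposed metric would report; 0 when nothing is pending.
    long oldestUnderReplicatedTimestamp() {
        for (Map.Entry<Long, Long> e : blockToTimestamp.entrySet()) {
            return e.getValue();  // first entry = oldest insertion
        }
        return 0L;
    }
}
```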
[jira] [Created] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document
Akira AJISAKA created HDFS-6704: --- Summary: Fix the command to launch JournalNode in HDFS-HA document Key: HDFS-6704 URL: https://issues.apache.org/jira/browse/HDFS-6704 Project: Hadoop HDFS Issue Type: Bug Components: documentation Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Akira AJISAKA Priority: Minor In HDFSHighAvailabilityWithQJM.html, {code} After all of the necessary configuration options have been set, you must start the JournalNode daemons on the set of machines where they will run. This can be done by running the command hdfs-daemon.sh journalnode and waiting for the daemon to start on each of the relevant machines. {code} hdfs-daemon.sh should be hadoop-daemon.sh since hdfs-daemon.sh does not exist. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm
Akira AJISAKA created HDFS-6781: --- Summary: Separate HDFS commands from CommandsManual.apt.vm Key: HDFS-6781 URL: https://issues.apache.org/jira/browse/HDFS-6781 Project: Hadoop HDFS Issue Type: Bug Components: documentation Reporter: Akira AJISAKA Assignee: Akira AJISAKA HDFS-side of HADOOP-10899. The CommandsManual lists very old information about running HDFS subcommands from the 'hadoop' shell CLI. These are deprecated and should be removed. If necessary, the HDFS subcommands should be added to the HDFS documentation. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6802) Some tests in TestDFSClientFailover are missing @Test annotation
Akira AJISAKA created HDFS-6802: --- Summary: Some tests in TestDFSClientFailover are missing @Test annotation Key: HDFS-6802 URL: https://issues.apache.org/jira/browse/HDFS-6802 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.5.0 Reporter: Akira AJISAKA HDFS-6334 added new tests in TestDFSClientFailover, but they are not executed by the JUnit framework because they lack the {{@Test}} annotation. -- This message was sent by Atlassian JIRA (v6.2#6252)
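Why an unannotated method is silently skipped can be shown with a self-contained sketch of annotation-based discovery (a local stand-in {{@Test}} annotation is defined here to keep the example dependency-free; this is not JUnit itself):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Stand-in for JUnit's @Test, kept local so the example is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Test {}

class FailoverTests {
    @Test public void testAnnotated() {}
    public void testForgotten() {}  // never discovered: missing @Test
}

class Runner {
    // Mimics how a JUnit-style runner discovers test methods by annotation.
    static List<String> discover(Class<?> c) {
        List<String> found = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                found.add(m.getName());
            }
        }
        return found;
    }
}
```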
[jira] [Created] (HDFS-6806) Rolling upgrades document should mention the version available
Akira AJISAKA created HDFS-6806: --- Summary: Rolling upgrades document should mention the version available Key: HDFS-6806 URL: https://issues.apache.org/jira/browse/HDFS-6806 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Affects Versions: 2.4.0 Reporter: Akira AJISAKA Priority: Minor We should document that rolling upgrades do not support upgrades from ~2.3 to 2.4+. It has been asked in the user ML many times. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6832) Fix the usage of 'hdfs namenode' command
Akira AJISAKA created HDFS-6832: --- Summary: Fix the usage of 'hdfs namenode' command Key: HDFS-6832 URL: https://issues.apache.org/jira/browse/HDFS-6832 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.4.1 Reporter: Akira AJISAKA Priority: Minor {code} [root@trunk ~]# hdfs namenode -help Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <downgrade|rollback> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ] ] {code} There are some issues in the usage to be fixed. # Usage: java NameNode should be Usage: hdfs namenode # The -rollingUpgrade started option should be added # The last ']' should be removed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HDFS-3655) Datanode recoverRbw could hang sometime
[ https://issues.apache.org/jira/browse/HDFS-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-3655. - Resolution: Duplicate Assignee: (was: Xiaobo Peng) Target Version/s: (was: 0.22.1) Closing this issue as duplicate. Please feel free to reopen if you disagree. Datanode recoverRbw could hang sometime --- Key: HDFS-3655 URL: https://issues.apache.org/jira/browse/HDFS-3655 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 0.22.0, 1.0.3, 2.0.0-alpha Reporter: Ming Ma Attachments: HDFS-3655-0.22-use-join-instead-of-wait.patch, HDFS-3655-0.22.patch This bug seems to apply to 0.22 and hadoop 2.0. I will upload the initial fix done by my colleague Xiaobo Peng shortly ( there is some logistics issue being worked on so that he can upload patch himself later ). recoverRbw tries to kill the old writer thread, but it has taken the lock (the FSDataset monitor object) that the old writer thread is waiting on ( for example the call to data.getTmpInputStreams ). 
DataXceiver for client /10.110.3.43:40193 [Receiving block blk_-3037542385914640638_57111747 client=DFSClient_attempt_201206021424_0001_m_000401_0] daemon prio=10 tid=0x7facf8111800 nid=0x6b64 in Object.wait() [0x7facd1ddb000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1186) - locked 0x0007856c1200 (a org.apache.hadoop.util.Daemon) at java.lang.Thread.join(Thread.java:1239) at org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:158) at org.apache.hadoop.hdfs.server.datanode.FSDataset.recoverRbw(FSDataset.java:1347) - locked 0x0007838398c0 (a org.apache.hadoop.hdfs.server.datanode.FSDataset) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:119) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlockInternal(DataXceiver.java:391) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:327) at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:405) at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:344) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:183) at java.lang.Thread.run(Thread.java:662) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks
Akira AJISAKA created HDFS-6945: --- Summary: ExcessBlocks metric may not be decremented if there are no over replicated blocks Key: HDFS-6945 URL: https://issues.apache.org/jira/browse/HDFS-6945 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Akira AJISAKA I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, however, there are no over-replicated blocks (confirmed by fsck). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk
Akira AJISAKA created HDFS-6980: --- Summary: TestWebHdfsFileSystemContract fails in trunk Key: HDFS-6980 URL: https://issues.apache.org/jira/browse/HDFS-6980 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Akira AJISAKA Many tests in TestWebHdfsFileSystemContract fail with a "too many open files" error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-7002) Failed to rolling upgrade hdfs from 2.2.0 to 2.4.1
[ https://issues.apache.org/jira/browse/HDFS-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-7002. - Resolution: Invalid Rolling upgrades are available for the upgrades from 2.4+ only. Rolling upgrade from ~2.3 to 2.4+ is not supported. Failed to rolling upgrade hdfs from 2.2.0 to 2.4.1 -- Key: HDFS-7002 URL: https://issues.apache.org/jira/browse/HDFS-7002 Project: Hadoop HDFS Issue Type: Bug Components: journal-node, namenode, qjm Affects Versions: 2.2.0, 2.4.1 Reporter: sam liu Priority: Blocker -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7116) Add a command to get the bandwidth of balancer
Akira AJISAKA created HDFS-7116: --- Summary: Add a command to get the bandwidth of balancer Key: HDFS-7116 URL: https://issues.apache.org/jira/browse/HDFS-7116 Project: Hadoop HDFS Issue Type: New Feature Components: balancer Reporter: Akira AJISAKA Now reading logs is the only way to check how the balancer bandwidth is set. It would be useful for administrators if they can get the parameter via CLI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-2247) Provide a -y option that skips the confirmation question of the namenode for use in scripts.
[ https://issues.apache.org/jira/browse/HDFS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-2247. - Resolution: Duplicate Now -force option can be used. Provide a -y option that skips the confirmation question of the namenode for use in scripts. -- Key: HDFS-2247 URL: https://issues.apache.org/jira/browse/HDFS-2247 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: Mathias Gug Labels: newbie As suggested in [HDFS 718|https://issues.apache.org/jira/browse/HDFS-718?focusedCommentId=12776315&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12776315], having an option to skip the confirmation question when formatting a NameNode would prove to be very useful when packaging hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-4014) Fix warnings found by findbugs2
[ https://issues.apache.org/jira/browse/HDFS-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-4014. - Resolution: Fixed Fix Version/s: 2.0.3-alpha Closing this issue since all of the sub-tasks were completed. Fix warnings found by findbugs2 Key: HDFS-4014 URL: https://issues.apache.org/jira/browse/HDFS-4014 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Fix For: 2.0.3-alpha Attachments: findbugs.out.24.html, findbugs.out.25.html, findbugs.out.26.html The HDFS side of HADOOP-8594. Umbrella jira for fixing the warnings found by findbugs 2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7750) Fix findbugs warnings in hdfs-bkjournal module
Akira AJISAKA created HDFS-7750: --- Summary: Fix findbugs warnings in hdfs-bkjournal module Key: HDFS-7750 URL: https://issues.apache.org/jira/browse/HDFS-7750 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Akira AJISAKA There are 3 findbugs warnings in hdfs-bkjournal module. We should fix them. {code} Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 386] Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 524] Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 733] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
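The standard fix for this findbugs warning (DM_DEFAULT_ENCODING) is to pass an explicit charset instead of relying on the platform default; a minimal sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// The usual remedy for "reliance on default encoding": name the charset
// explicitly so behavior does not vary with the JVM's platform default.
class EncodingFix {
    static byte[] toBytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);  // explicit, portable
    }
}
```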
[jira] [Resolved] (HDFS-7750) Fix findbugs warnings in hdfs-bkjournal module
[ https://issues.apache.org/jira/browse/HDFS-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-7750. - Resolution: Duplicate Fix findbugs warnings in hdfs-bkjournal module -- Key: HDFS-7750 URL: https://issues.apache.org/jira/browse/HDFS-7750 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Akira AJISAKA Assignee: Rakesh R Labels: newbie There are 3 findbugs warnings in hdfs-bkjournal module. We should fix them. {code} Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 386] Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 524] Found reliance on default encoding: String.getBytes() At BookKeeperJournalManager.java:[line 733] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-2628) Remove Mapred filenames from HDFS findbugsExcludeFile.xml file
[ https://issues.apache.org/jira/browse/HDFS-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-2628. - Resolution: Duplicate This issue was fixed by HDFS-6025. Closing. Remove Mapred filenames from HDFS findbugsExcludeFile.xml file -- Key: HDFS-2628 URL: https://issues.apache.org/jira/browse/HDFS-2628 Project: Hadoop HDFS Issue Type: Improvement Components: test Reporter: Uma Maheswara Rao G Priority: Minor Mapreduce filesnames are there in hadoop-hdfs-project\hadoop-hdfs\dev-support\findbugsExcludeFile.xml is it intentional? i think we should remove them from HDFS. Exampl: {code} !-- Ignore warnings where child class has the same name as super class. Classes based on Old API shadow names from new API. Should go off after HADOOP-1.0 -- Match Class name=~org.apache.hadoop.mapred.* / Bug pattern=NM_SAME_SIMPLE_NAME_AS_SUPERCLASS / /Match {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7732) Fix the order of the parameters in DFSConfigKeys
Akira AJISAKA created HDFS-7732: --- Summary: Fix the order of the parameters in DFSConfigKeys Key: HDFS-7732 URL: https://issues.apache.org/jira/browse/HDFS-7732 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.6.0 Reporter: Akira AJISAKA Priority: Trivial In DFSConfigKeys.java, there are some parameters between {{DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY}} and {{DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT}}. {code} public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY = "dfs.client.read.shortcircuit.buffer.size"; public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_KEY = "dfs.client.read.shortcircuit.streams.cache.size"; public static final int DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_DEFAULT = 256; public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_KEY = "dfs.client.read.shortcircuit.streams.cache.expiry.ms"; public static final long DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_DEFAULT = 5 * 60 * 1000; public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024; {code} The order should be corrected as {code} public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY = "dfs.client.read.shortcircuit.buffer.size"; public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024; public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_KEY = "dfs.client.read.shortcircuit.streams.cache.size"; public static final int DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_DEFAULT = 256; public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_KEY = "dfs.client.read.shortcircuit.streams.cache.expiry.ms"; public static final long DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_DEFAULT = 5 * 60 * 1000; {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-6571) NameNode should delete intermediate fsimage.ckpt when checkpoint fails
[ https://issues.apache.org/jira/browse/HDFS-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-6571. - Resolution: Duplicate NameNode should delete intermediate fsimage.ckpt when checkpoint fails -- Key: HDFS-6571 URL: https://issues.apache.org/jira/browse/HDFS-6571 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Charles Lamb When a checkpoint fails while getting a new fsimage from the standby NameNode or SecondaryNameNode, the intermediate fsimage (fsimage.ckpt_<txid>) is left behind and never cleaned up. If the fsimage is large and checkpoints fail many times, the accumulating intermediate fsimages may cause the NameNode to run out of disk space. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7754) Fix findbugs warning produced by HDFS-7710
Akira AJISAKA created HDFS-7754: --- Summary: Fix findbugs warning produced by HDFS-7710 Key: HDFS-7754 URL: https://issues.apache.org/jira/browse/HDFS-7754 Project: Hadoop HDFS Issue Type: Bug Reporter: Akira AJISAKA There is a findbugs warning produced by HDFS-7710. https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7812) Remove httpclient dependency from hadoop-hdfs
Akira AJISAKA created HDFS-7812: --- Summary: Remove httpclient dependency from hadoop-hdfs Key: HDFS-7812 URL: https://issues.apache.org/jira/browse/HDFS-7812 Project: Hadoop HDFS Issue Type: Task Reporter: Akira AJISAKA Priority: Trivial Sub-task of HADOOP-10105. Remove unused import in TestWebHDFSTokens.java. {code} import org.apache.commons.httpclient.HttpConnection; {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-5518) HDFS doesn't compile/run against Guava 1.5
[ https://issues.apache.org/jira/browse/HDFS-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-5518. - Resolution: Duplicate Target Version/s: (was: 3.0.0) Fixed by HADOOP-11600. Now Hadoop source code can be compiled with Guava 17. HDFS doesn't compile/run against Guava 1.5 -- Key: HDFS-5518 URL: https://issues.apache.org/jira/browse/HDFS-5518 Project: Hadoop HDFS Issue Type: Bug Components: journal-node, test Affects Versions: 2.2.0 Reporter: Steve Loughran Assignee: Vinayakumar B Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, HADOOP-10101.patch HADOOP-10101 updates hadoop project to using the latest version of google guava, so reduce conflict with other projects (including bookkeeper). Two classes in HDFS don't compile, as google removed some classes # NullableOutputStream gone: switch to using Hadoop's own {{NullableOutputStream}} # {{Ranges}} class gone: switch to {{Range}} class -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7881) TestHftpFileSystem#testSeek fails in branch-2
Akira AJISAKA created HDFS-7881: --- Summary: TestHftpFileSystem#testSeek fails in branch-2 Key: HDFS-7881 URL: https://issues.apache.org/jira/browse/HDFS-7881 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Akira AJISAKA Priority: Blocker TestHftpFileSystem#testSeek fails in branch-2. {code} --- T E S T S --- Running org.apache.hadoop.hdfs.web.TestHftpFileSystem Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.201 sec FAILURE! - in org.apache.hadoop.hdfs.web.TestHftpFileSystem testSeek(org.apache.hadoop.hdfs.web.TestHftpFileSystem) Time elapsed: 0.054 sec ERROR! java.io.IOException: Content-Length is missing: {null=[HTTP/1.1 206 Partial Content], Date=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 GMT], Expires=[Wed, 04 Mar 2015 05:32:30 GMT, Wed, 04 Mar 2015 05:32:30 GMT], Connection=[close], Content-Type=[text/plain; charset=utf-8], Server=[Jetty(6.1.26)], Content-Range=[bytes 7-9/10], Pragma=[no-cache, no-cache], Cache-Control=[no-cache]} at org.apache.hadoop.hdfs.web.ByteRangeInputStream.openInputStream(ByteRangeInputStream.java:132) at org.apache.hadoop.hdfs.web.ByteRangeInputStream.getInputStream(ByteRangeInputStream.java:104) at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:181) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.hdfs.web.TestHftpFileSystem.testSeek(TestHftpFileSystem.java:253) Results : Tests in error: TestHftpFileSystem.testSeek:253 » IO Content-Length is missing: {null=[HTTP/1 Tests run: 14, Failures: 0, Errors: 1, Skipped: 0 {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7880) Remove the tests for legacy Web UI in branch-2
Akira AJISAKA created HDFS-7880: --- Summary: Remove the tests for legacy Web UI in branch-2 Key: HDFS-7880 URL: https://issues.apache.org/jira/browse/HDFS-7880 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.7.0 Reporter: Akira AJISAKA Priority: Blocker These tests fail in branch-2 because they assert that the legacy UI exists. * TestJournalNode.testHttpServer:174 expected:<200> but was:<404> * TestNNWithQJM.testWebPageHasQjmInfo:229 expected:<200> but was:<404> * TestHAWebUI.testLinkAndClusterSummary:50 expected:<200> but was:<404> * TestHostsFiles.testHostsExcludeDfshealthJsp:130 expected:<200> but was:<404> * TestSecondaryWebUi.testSecondaryWebUiJsp:87 expected:<200> but was:<404> -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7708) Balancer should delete its pid file when it completes rebalance
Akira AJISAKA created HDFS-7708: --- Summary: Balancer should delete its pid file when it completes rebalance Key: HDFS-7708 URL: https://issues.apache.org/jira/browse/HDFS-7708 Project: Hadoop HDFS Issue Type: Bug Components: balancer & mover Affects Versions: 2.6.0 Reporter: Akira AJISAKA When the balancer completes rebalancing and exits, it does not delete its pid file. When the balancer is started again, kill -0 <pid> is executed to confirm whether the process is running. The problem is: * If another process is running with the same pid as `cat pidfile`, the balancer fails to start with the following message: {code} balancer is running as process 3443. Stop it first. {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
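The liveness probe that hadoop-daemon.sh performs with kill -0 can be sketched in Java to show why a recycled pid defeats it (illustrative only; the real check is shell):

```java
import java.util.Optional;

// Sketch of a kill -0 style liveness check: it only proves that *some*
// process has the pid, not that it is the balancer, so a stale pid file
// pointing at an unrelated live process blocks startup.
class PidCheck {
    static boolean isProcessAlive(long pid) {
        Optional<ProcessHandle> h = ProcessHandle.of(pid);
        return h.isPresent() && h.get().isAlive();
    }
}
```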
[jira] [Created] (HDFS-8350) Remove old webhdfs.xml
Akira AJISAKA created HDFS-8350: --- Summary: Remove old webhdfs.xml Key: HDFS-8350 URL: https://issues.apache.org/jira/browse/HDFS-8350 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Affects Versions: 2.7.0 Reporter: Akira AJISAKA Priority: Minor The old-style document hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml is no longer maintained and WebHDFS.md is used instead. We can remove webhdfs.xml. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8351) Remove namenode -finalize option from document
Akira AJISAKA created HDFS-8351: --- Summary: Remove namenode -finalize option from document Key: HDFS-8351 URL: https://issues.apache.org/jira/browse/HDFS-8351 Project: Hadoop HDFS Issue Type: Bug Components: documentation Affects Versions: 2.7.0 Reporter: Akira AJISAKA Assignee: Akira AJISAKA hdfs namenode -finalize option was removed by HDFS-5138, however, the document was not updated. http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HDFS-8400) Fix failed TestHdfsConfigFields
[ https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reopened HDFS-8400: - Fix failed TestHdfsConfigFields --- Key: HDFS-8400 URL: https://issues.apache.org/jira/browse/HDFS-8400 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Liu Shaohui Assignee: Liu Shaohui Attachments: HDFS-8400.001.patch TestHdfsConfigFields failed for: {code} hdfs-default.xml has 2 properties missing in class org.apache.hadoop.hdfs.DFSConfigKeys dfs.htrace.spanreceiver.classes dfs.client.htrace.spanreceiver.classes {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-8400) Fix failed TestHdfsConfigFields
[ https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-8400. - Resolution: Duplicate Fix failed TestHdfsConfigFields --- Key: HDFS-8400 URL: https://issues.apache.org/jira/browse/HDFS-8400 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Liu Shaohui Assignee: Liu Shaohui Attachments: HDFS-8400.001.patch TestHdfsConfigFields failed for: {code} hdfs-default.xml has 2 properties missing in class org.apache.hadoop.hdfs.DFSConfigKeys dfs.htrace.spanreceiver.classes dfs.client.htrace.spanreceiver.classes {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8149) The footer of the Web UI Hadoop, 2014 is old
Akira AJISAKA created HDFS-8149: --- Summary: The footer of the Web UI Hadoop, 2014 is old Key: HDFS-8149 URL: https://issues.apache.org/jira/browse/HDFS-8149 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Akira AJISAKA Need to be updated to 2015. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8443) Document dfs.namenode.service.handler.count in hdfs-site.xml
Akira AJISAKA created HDFS-8443: --- Summary: Document dfs.namenode.service.handler.count in hdfs-site.xml Key: HDFS-8443 URL: https://issues.apache.org/jira/browse/HDFS-8443 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Akira AJISAKA When dfs.namenode.servicerpc-address is configured, the NameNode launches an extra RPC server to handle requests from non-client nodes. dfs.namenode.service.handler.count specifies the number of threads for that server, but the parameter is not documented anywhere. I found a mail asking about this parameter: http://mail-archives.apache.org/mod_mbox/hadoop-user/201505.mbox/%3CE0D5A619-BDEA-44D2-81EB-C32B8464133D%40gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
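For context, a minimal hdfs-site.xml sketch of the two properties involved (the hostname, port, and thread count below are illustrative example values, not recommendations):

```xml
<!-- Illustrative sketch: configuring a service RPC address makes the
     NameNode start a second RPC server, sized by the handler count. -->
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value>namenode.example.com:8040</value>
</property>
<property>
  <name>dfs.namenode.service.handler.count</name>
  <value>20</value>
</property>
```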
[jira] [Resolved] (HDFS-5863) Improve OfflineImageViewer
[ https://issues.apache.org/jira/browse/HDFS-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-5863. - Resolution: Fixed Improve OfflineImageViewer -- Key: HDFS-5863 URL: https://issues.apache.org/jira/browse/HDFS-5863 Project: Hadoop HDFS Issue Type: Improvement Components: tools Reporter: Akira AJISAKA This is an umbrella jira for improving Offline Image Viewer. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8615) Correct HTTP method in WebHDFS document
Akira AJISAKA created HDFS-8615: --- Summary: Correct HTTP method in WebHDFS document Key: HDFS-8615 URL: https://issues.apache.org/jira/browse/HDFS-8615 Project: Hadoop HDFS Issue Type: Bug Components: documentation Reporter: Akira AJISAKA For example, {{-X PUT}} should be removed from the following curl command. {code:title=WebHDFS.md} ### Get ACL Status * Submit a HTTP GET request. curl -i -X PUT http://HOST:PORT/webhdfs/v1/PATH?op=GETACLSTATUS {code} Other than this example, there are several other commands from which {{-X PUT}} should be removed. We should fix them all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-8459) Question: Why Namenode doesn't judge the status of replicas when convert block status from commited to complete?
[ https://issues.apache.org/jira/browse/HDFS-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-8459. - Resolution: Invalid Apache JIRA is for reporting bugs or filing proposed enhancements or features, not for end-user questions. I recommend e-mailing u...@hadoop.apache.org with this question. Question: Why Namenode doesn't judge the status of replicas when convert block status from commited to complete? - Key: HDFS-8459 URL: https://issues.apache.org/jira/browse/HDFS-8459 Project: Hadoop HDFS Issue Type: Improvement Reporter: cuiyang Why doesn't the NameNode check the status of replicas when converting a block from committed to complete? When a client finishes writing a block and calls namenode::complete(), the NameNode does the following (in BlockManager::commitOrCompleteLastBlock): {code} final boolean b = commitBlock((BlockInfoUnderConstruction)lastBlock, commitBlock); if (countNodes(lastBlock).liveReplicas() >= minReplication) completeBlock(bc, bc.numBlocks()-1, false); return b; {code} But the NameNode doesn't check how many finalized replicas this block has! It should be: if there is not at least one finalized replica, the block should not be converted to complete status. According to appendDesign3.pdf (https://issues.apache.org/jira/secure/attachment/12445209/appendDesign3.pdf): Complete: A complete block is a block whose length and GS are finalized and the NameNode has seen a GS/len-matched finalized replica of the block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer
Akira AJISAKA created HDFS-8462: --- Summary: Implement GETXATTRS and LISTXATTRS operation for WebImageViewer Key: HDFS-8462 URL: https://issues.apache.org/jira/browse/HDFS-8462 Project: Hadoop HDFS Issue Type: New Feature Reporter: Akira AJISAKA In Hadoop 2.7.0, WebImageViewer supports the following operations: * {{GETFILESTATUS}} * {{LISTSTATUS}} * {{GETACLSTATUS}} I'm thinking it would be better for administrators if {{GETXATTRS}} and {{LISTXATTRS}} are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8944) Make dfsadmin command option case insensitive
Akira AJISAKA created HDFS-8944: --- Summary: Make dfsadmin command option case insensitive Key: HDFS-8944 URL: https://issues.apache.org/jira/browse/HDFS-8944 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Priority: Minor Now dfsadmin command options are case sensitive except allowSnapshot and disallowSnapshot. It would be better to make them case insensitive for usability and consistency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
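The case-insensitive matching being proposed is a small change; a minimal sketch of the idea (hypothetical helper in Python, not the actual DFSAdmin code):

```python
def match_option(arg, known_options):
    """Return the canonical option matching `arg` case-insensitively,
    or None if nothing matches. Hypothetical sketch, not DFSAdmin code."""
    lowered = arg.lower()
    for opt in known_options:
        if opt.lower() == lowered:
            return opt
    return None

# Illustrative option list; with this matching, "-allowsnapshot" and
# "-allowSnapshot" resolve to the same command.
OPTIONS = ["-allowSnapshot", "-disallowSnapshot", "-safemode", "-report"]
```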
[jira] [Created] (HDFS-8929) Add a metric to expose the timestamp of the last journal
Akira AJISAKA created HDFS-8929: --- Summary: Add a metric to expose the timestamp of the last journal Key: HDFS-8929 URL: https://issues.apache.org/jira/browse/HDFS-8929 Project: Hadoop HDFS Issue Type: New Feature Components: journal-node Reporter: Akira AJISAKA If there are three JNs and only one JN is failing to journal, we can detect it by monitoring the difference of the last written transaction id among JNs from the NN WebUI or JN metrics. However, it's difficult to define the threshold to alert on because the increase rate of the number of transactions depends on how busy the cluster is. Therefore I'd like to propose a metric to expose the timestamp of the last journal. That way we can easily alert if a JN has been failing to journal for some fixed period. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
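The monitoring side of that proposal could then be a simple wall-clock comparison (hypothetical sketch; the metric's name and unit are assumptions, here taken to be a Unix timestamp in milliseconds like other Hadoop timestamps):

```python
import time

def journal_is_lagging(last_journal_ts_ms, threshold_secs, now_secs=None):
    """True if the JN has not journaled within `threshold_secs`.

    `last_journal_ts_ms` stands in for the proposed metric (assumed to be
    milliseconds since the epoch).
    """
    if now_secs is None:
        now_secs = time.time()
    return (now_secs - last_journal_ts_ms / 1000.0) > threshold_secs
```

Unlike a transaction-id gap, this threshold does not depend on how busy the cluster is.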
[jira] [Created] (HDFS-8844) TestHDFSCLI does not cleanup the test directory
Akira AJISAKA created HDFS-8844: --- Summary: TestHDFSCLI does not cleanup the test directory Key: HDFS-8844 URL: https://issues.apache.org/jira/browse/HDFS-8844 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Akira AJISAKA Priority: Minor If TestHDFSCLI is executed twice without {{mvn clean}}, the second try fails. Here are the failing test cases: {noformat} 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(231)) - Failing tests: 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(232)) - -- 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 226: get: getting non existent(absolute path) 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 227: get: getting non existent file(relative path) 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 228: get: Test for hdfs:// path - getting non existent 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 229: get: Test for Namenode's path - getting non existent 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 250: copyToLocal: non existent relative path 2015-07-31 21:35:17,654 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 251: copyToLocal: non existent absolute path 2015-07-31 21:35:17,655 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 252: copyToLocal: Test for hdfs:// path - non existent file/directory 2015-07-31 21:35:17,655 [main] INFO cli.CLITestHelper (CLITestHelper.java:displayResults(238)) - 253: copyToLocal: Test for Namenode's path - non existent file/directory {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block
[ https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reopened HDFS-6682: - Reverted this patch from trunk and branch-2. Add a metric to expose the timestamp of the oldest under-replicated block - Key: HDFS-6682 URL: https://issues.apache.org/jira/browse/HDFS-6682 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Akira AJISAKA Labels: metrics Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch In the following case, data in HDFS is lost and a client needs to put the same file again. # A client puts a file to HDFS # A DataNode crashes before replicating a block of the file to other DataNodes I propose a metric to expose the timestamp of the oldest under-replicated/corrupt block. That way the client can know which file to retain for the retry. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8858) DU should be re-executed if the target directory exists
Akira AJISAKA created HDFS-8858: --- Summary: DU should be re-executed if the target directory exists Key: HDFS-8858 URL: https://issues.apache.org/jira/browse/HDFS-8858 Project: Hadoop HDFS Issue Type: Bug Components: datanode Reporter: Akira AJISAKA Priority: Minor The Unix du command occasionally fails when a child file/directory of the target path is being moved or deleted during the scan. I'm thinking we should re-try du if the target path still exists, to avoid failures in writing replicas. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
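The retry idea can be sketched like this (illustrative Python, not the DataNode's actual DU class):

```python
import os
import subprocess

def du_used_kb(path, max_retries=3):
    """Disk usage of `path` in kilobytes, retrying transient du failures.

    Sketch of the proposal: du can exit non-zero when a child is deleted
    mid-scan, so retry as long as the target path itself still exists.
    """
    last_err = None
    for _ in range(max_retries):
        try:
            out = subprocess.check_output(["du", "-s", "-k", path])
            return int(out.split()[0])
        except subprocess.CalledProcessError as err:
            last_err = err
            if not os.path.exists(path):
                raise  # the target itself is gone: not a transient failure
    raise last_err
```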
[jira] [Created] (HDFS-8812) TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails
Akira AJISAKA created HDFS-8812: --- Summary: TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails Key: HDFS-8812 URL: https://issues.apache.org/jira/browse/HDFS-8812 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Akira AJISAKA TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails. {noformat} Running org.apache.hadoop.hdfs.TestDistributedFileSystem Tests run: 18, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 50.038 sec FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem testDFSClientPeerWriteTimeout(org.apache.hadoop.hdfs.TestDistributedFileSystem) Time elapsed: 0.66 sec FAILURE! java.lang.AssertionError: wrong exception:java.lang.AssertionError: write should timeout at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1206) {noformat} See https://builds.apache.org/job/PreCommit-HDFS-Build/11783/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/ and https://builds.apache.org/job/PreCommit-HDFS-Build/11786/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-8616) Cherry pick HDFS-6495 for excess block leak
[ https://issues.apache.org/jira/browse/HDFS-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-8616. - Resolution: Done I've backported HDFS-6495 to 2.7.2. Please reopen this issue if you disagree. Cherry pick HDFS-6495 for excess block leak --- Key: HDFS-8616 URL: https://issues.apache.org/jira/browse/HDFS-8616 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.0.0-alpha Reporter: Daryn Sharp Assignee: Akira AJISAKA Busy clusters quickly leak tens or hundreds of thousands of excess blocks which slow BR processing. HDFS-6495 should be cherry picked into 2.7.x. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8749) Fix findbugs warning in BlockManager.java
Akira AJISAKA created HDFS-8749: --- Summary: Fix findbugs warning in BlockManager.java Key: HDFS-8749 URL: https://issues.apache.org/jira/browse/HDFS-8749 Project: Hadoop HDFS Issue Type: Bug Reporter: Akira AJISAKA Priority: Minor {code:title=BlockManager#checkBlocksProperlyReplicated} final BlockInfoUnderConstruction uc = (BlockInfoUnderConstruction)b; {code} The variable {{uc}} is unused, and this causes a findbugs warning. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-8743) Update document for hdfs fetchdt
[ https://issues.apache.org/jira/browse/HDFS-8743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-8743. - Resolution: Duplicate Update document for hdfs fetchdt Key: HDFS-8743 URL: https://issues.apache.org/jira/browse/HDFS-8743 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Akira AJISAKA Assignee: Brahma Reddy Battula Now the hdfs fetchdt command accepts the following options: * --webservice * --renewer * --cancel * --renew * --print However, only the --webservice option is documented. http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8743) Update document for hdfs fetchdt
Akira AJISAKA created HDFS-8743: --- Summary: Update document for hdfs fetchdt Key: HDFS-8743 URL: https://issues.apache.org/jira/browse/HDFS-8743 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Akira AJISAKA Now the hdfs fetchdt command accepts the following options: * --webservice * --renewer * --cancel * --renew * --print However, only the --webservice option is documented. http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#fetchdt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9024) Deprecate TotalFiles metric
Akira AJISAKA created HDFS-9024: --- Summary: Deprecate TotalFiles metric Key: HDFS-9024 URL: https://issues.apache.org/jira/browse/HDFS-9024 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Akira AJISAKA There are two metrics (TotalFiles and FilesTotal) which are the same. In HDFS-5165, we decided to remove TotalFiles but we need to deprecate the metric before removing it. This issue is to deprecate the metric. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-9173) Erasure Coding: Lease recovery for striped file
[ https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA resolved HDFS-9173. - Resolution: Fixed License issue was fixed by HDFS-9582. Closing. > Erasure Coding: Lease recovery for striped file > --- > > Key: HDFS-9173 > URL: https://issues.apache.org/jira/browse/HDFS-9173 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Walter Su >Assignee: Walter Su > Fix For: 3.0.0 > > Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch, > HDFS-9173.02.step125.patch, HDFS-9173.03.patch, HDFS-9173.04.patch, > HDFS-9173.05.patch, HDFS-9173.06.patch, HDFS-9173.07.patch, > HDFS-9173.08.patch, HDFS-9173.09.patch, HDFS-9173.09.patch, HDFS-9173.10.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HDFS-9173) Erasure Coding: Lease recovery for striped file
[ https://issues.apache.org/jira/browse/HDFS-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reopened HDFS-9173: - > Erasure Coding: Lease recovery for striped file > --- > > Key: HDFS-9173 > URL: https://issues.apache.org/jira/browse/HDFS-9173 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Walter Su >Assignee: Walter Su > Fix For: 3.0.0 > > Attachments: HDFS-9173.00.wip.patch, HDFS-9173.01.patch, > HDFS-9173.02.step125.patch, HDFS-9173.03.patch, HDFS-9173.04.patch, > HDFS-9173.05.patch, HDFS-9173.06.patch, HDFS-9173.07.patch, > HDFS-9173.08.patch, HDFS-9173.09.patch, HDFS-9173.09.patch, HDFS-9173.10.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-10485) Fix findbugs warning in FSEditLog.java in branch-2
Akira AJISAKA created HDFS-10485: Summary: Fix findbugs warning in FSEditLog.java in branch-2 Key: HDFS-10485 URL: https://issues.apache.org/jira/browse/HDFS-10485 Project: Hadoop HDFS Issue Type: Bug Reporter: Akira AJISAKA Found 1 findbugs warning when creating a patch for branch-2 in HDFS-10341 (https://builds.apache.org/job/PreCommit-HDFS-Build/15639/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) {noformat} Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.FSEditLog.numTransactionsBatchedInSync; locked 50% of time Bug type IS2_INCONSISTENT_SYNC (click for details) In class org.apache.hadoop.hdfs.server.namenode.FSEditLog Field org.apache.hadoop.hdfs.server.namenode.FSEditLog.numTransactionsBatchedInSync Synchronized 50% of the time Unsynchronized access at FSEditLog.java:[line 676] Unsynchronized access at FSEditLog.java:[line 676] Synchronized access at FSEditLog.java:[line 1254] Synchronized access at FSEditLog.java:[line 716] {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
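The IS2_INCONSISTENT_SYNC finding above means the field is written under the FSEditLog monitor but read without it; the generic fix is to take the same lock on every access, reads included. A language-neutral sketch of that fix (in Python, not the actual FSEditLog change):

```python
import threading

class SyncedCounter:
    """Every access to the shared counter, reads included, holds the same
    lock. Leaving the getter unsynchronized while the writer synchronizes
    is exactly what findbugs flags as IS2_INCONSISTENT_SYNC."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def add(self, n):
        with self._lock:
            self._value += n

    def get(self):
        with self._lock:  # the previously unsynchronized read
            return self._value
```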