[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092786#comment-15092786 ] Hudson commented on HDFS-9621: -- FAILURE: Integrated in Hadoop-trunk-Commit #9085 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9085/]) HDFS-9621. getListing wrongly associates Erasure Coding policy to (jing9: rev 9f4bf3bdf9e74800643477cfb18361e01cf6859c) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
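The root cause described in this issue lends itself to a toy illustration: a replicated file created before the policy was set on its parent directory stores no EC policy of its own, so the listing must read the file's own attribute instead of inheriting the directory's. A minimal sketch with hypothetical names (not the actual HDFS classes or the committed patch):

```java
public class ListingSketch {
    // Buggy derivation: unconditionally inherit the parent directory's
    // EC policy, mislabeling pre-existing replicated files as EC files.
    static String policyFromParent(String fileOwnPolicy, String dirPolicy) {
        return dirPolicy;
    }

    // Fixed derivation: use the policy stored on the file itself;
    // null means a plain replicated file.
    static String policyFromFile(String fileOwnPolicy, String dirPolicy) {
        return fileOwnPolicy;
    }

    public static void main(String[] args) {
        String preExisting = null; // file created before the policy was set
        System.out.println("buggy: " + policyFromParent(preExisting, "RS-6-3"));
        System.out.println("fixed: " + policyFromFile(preExisting, "RS-6-3"));
    }
}
```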
[jira] [Updated] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.
[ https://issues.apache.org/jira/browse/HDFS-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-9584: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Thanks [~surendrasingh] for the contribution and all for the reviews. I've committed the change to trunk, branch-2, and branch-2.8. > NPE in distcp when ssl configuration file does not exist in class path. > --- > > Key: HDFS-9584 > URL: https://issues.apache.org/jira/browse/HDFS-9584 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Labels: supportability > Fix For: 2.8.0 > > Attachments: HDFS-9584.001.patch, HDFS-9584.patch, HDFS-9584.patch > > > {noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml > hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat} > If the {{ssl-distcp.xml}} file does not exist in the class path, distcp will throw a > NullPointerException. > {code} > java.lang.NullPointerException > at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266) > at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250) > at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175) > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:127) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:431) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.
[ https://issues.apache.org/jira/browse/HDFS-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093088#comment-15093088 ] Wei-Chiu Chuang commented on HDFS-9584: --- Thanks [~xyao] for the commit. I noticed that the commit message is a bit misleading: "HDFS-8584. NPE in distcp when ssl configuration file does not exist in class path. Contributed by Surendra Singh Lilhore." It should be HDFS-9584 instead. > NPE in distcp when ssl configuration file does not exist in class path. > --- > > Key: HDFS-9584 > URL: https://issues.apache.org/jira/browse/HDFS-9584 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Labels: supportability > Fix For: 2.8.0 > > Attachments: HDFS-9584.001.patch, HDFS-9584.patch, HDFS-9584.patch > > > {noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml > hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat} > If the {{ssl-distcp.xml}} file does not exist in the class path, distcp will throw a > NullPointerException. > {code} > java.lang.NullPointerException > at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266) > at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250) > at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175) > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:127) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:431) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
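The committed change itself is not quoted in this thread; as a general illustration, the shape of a fix for this class of NPE is to fail fast with a descriptive error when the classpath lookup returns null, instead of letting the null propagate into {{setupSSLConfig}}. A sketch with hypothetical names (not the actual DistCp code):

```java
public class SslConfigGuard {
    // Hypothetical helper: resolve a resource on the classpath, failing
    // fast with a descriptive message instead of returning null.
    static String locateOnClasspath(String resource) {
        java.net.URL url = ClassLoader.getSystemResource(resource);
        if (url == null) {
            throw new IllegalArgumentException(
                "Could not find SSL configuration file \"" + resource
                + "\" on the classpath");
        }
        return url.toString();
    }

    public static void main(String[] args) {
        try {
            locateOnClasspath("ssl-distcp.xml");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // clear error, no NPE
        }
    }
}
```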
[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details
[ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093108#comment-15093108 ] Hadoop QA commented on HDFS-9634: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-9634 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781698/HDFS-9634.001.patch | | JIRA Issue | HDFS-9634 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14096/console | This message was automatically generated. > webhdfs client side exceptions don't provide enough details > --- > > Key: HDFS-9634 > URL: https://issues.apache.org/jira/browse/HDFS-9634 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0, 2.8.0, 2.7.1 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: HDFS-9634.001.patch > > > When a WebHDFS client side exception (for example, read timeout) occurs there > are no details beyond the fact that a timeout occurred. Ideally it should say > which node is responsible for the timeout, but failing that it should at > least say which node we're talking to so we can examine that node's logs to > further investigate. 
> {noformat} > java.net.SocketTimeoutException: Read timed out > at java.net.SocketInputStream.socketRead0(Native Method) > at java.net.SocketInputStream.read(SocketInputStream.java:150) > at java.net.SocketInputStream.read(SocketInputStream.java:121) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at sun.net.www.MeteredStream.read(MeteredStream.java:134) > at java.io.FilterInputStream.read(FilterInputStream.java:133) > at > sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035) > at > org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121) > at > org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188) > at java.io.DataInputStream.read(DataInputStream.java:149) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at > com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58) > at java.io.FilterInputStream.read(FilterInputStream.java:107) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387) > ... 12 more > {noformat} > There are no clues as to which datanode we're talking to nor which datanode > was responsible for the timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
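The improvement requested above amounts to attaching the remote node's address when a client-side exception is rethrown, so the right datanode's logs can be examined. A hypothetical sketch of that idea (not the attached HDFS-9634.001.patch):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

public class NodeAwareException {
    // Hypothetical helper: wrap a client-side failure with the remote
    // node's address, preserving the original exception as the cause.
    static IOException withNode(IOException cause, String node) {
        return new IOException(
            cause.getMessage() + " (while connected to " + node + ")", cause);
    }

    public static void main(String[] args) {
        IOException e = withNode(new SocketTimeoutException("Read timed out"),
                "dn42.example.com:50075");
        System.out.println(e.getMessage());
    }
}
```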
[jira] [Updated] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9621: I've committed this to trunk. Thanks Nicholas, Zhe, and Kai for the review! > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9588: --- Attachment: HDFS-9588-HDFS-1312.004.patch [~arpitagarwal] Thanks for the comments. This patch fixes all 3 issues mentioned by you. > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch, > HDFS-9588-HDFS-1312.004.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9629: Status: Patch Available (was: Open) > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8562) HDFS Performance is impacted by FileInputStream Finalizer
[ https://issues.apache.org/jira/browse/HDFS-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093017#comment-15093017 ] Kai Zheng commented on HDFS-8562: - Hi Yanping, bq. Of course, it is just in theory, not practical, as it will likely re-design entire HDFS. No, it won't be like that. Either way would only have a limited impact on HDFS, in restricted scopes. bq. there is really no need to fix FileInputStream and FileOutputStream as new code can be written using above new functions, right? I thought Colin had a comment above saying that even if we have the perfect API and solution for the issue you reported here, we'll still need a fix like the attached patch for previous JDK versions. > HDFS Performance is impacted by FileInputStream Finalizer > - > > Key: HDFS-8562 > URL: https://issues.apache.org/jira/browse/HDFS-8562 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, performance >Affects Versions: 2.5.0 > Environment: Impact any application that uses HDFS >Reporter: Yanping Wang > Attachments: HDFS-8562.002b.patch, HDFS-8562.003a.patch, > HDFS-8562.003b.patch, HDFS-8562.004a.patch, HDFS-8562.004b.patch, > HDFS-8562.01.patch > > > While running HBase on HDFS datanodes, we noticed excessively high GC > pause spikes. For example, with jdk8 update 40 and the G1 collector, we saw > datanode GC pauses spike toward 160 milliseconds while they should be around > 20 milliseconds. > We tracked this down in the GC logs and found those long GC pauses were devoted to > processing a high number of final references. 
> For example, this Young GC:
> 2715.501: [GC pause (G1 Evacuation Pause) (young) 0.1529017 secs]
> 2715.572: [SoftReference, 0 refs, 0.0001034 secs]
> 2715.572: [WeakReference, 0 refs, 0.123 secs]
> 2715.572: [FinalReference, 8292 refs, 0.0748194 secs]
> 2715.647: [PhantomReference, 0 refs, 160 refs, 0.0001333 secs]
> 2715.647: [JNI Weak Reference, 0.140 secs]
> [Ref Proc: 122.3 ms]
> [Eden: 910.0M(910.0M)->0.0B(911.0M) Survivors: 11.0M->10.0M Heap: 951.1M(1536.0M)->40.2M(1536.0M)]
> [Times: user=0.47 sys=0.01, real=0.15 secs]
> This young GC took a 152.9 millisecond STW pause, of which 122.3 milliseconds were spent in Ref Proc, which processed 8292 FinalReferences in 74.8 milliseconds plus some overhead.
> We used JFR and JMAP with Memory Analyzer to track this down and found those FinalReferences were all from FileInputStream. We checked the HDFS code and saw the use of FileInputStream in the datanode:
> https://apache.googlesource.com/hadoop-common/+/refs/heads/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
> {code}
> public static MappableBlock load(long length,
>     FileInputStream blockIn, FileInputStream metaIn,
>     String blockFileName) throws IOException {
>   MappableBlock mappableBlock = null;
>   MappedByteBuffer mmap = null;
>   FileChannel blockChannel = null;
>   try {
>     blockChannel = blockIn.getChannel();
>     if (blockChannel == null) {
>       throw new IOException("Block InputStream has no FileChannel.");
>     }
>     mmap = blockChannel.map(MapMode.READ_ONLY, 0, length);
>     NativeIO.POSIX.getCacheManipulator().mlock(blockFileName, mmap, length);
>     verifyChecksum(length, metaIn, blockChannel, blockFileName);
>     mappableBlock = new MappableBlock(mmap, length);
>   } finally {
>     IOUtils.closeQuietly(blockChannel);
>     if (mappableBlock == null) {
>       if (mmap != null) {
>         NativeIO.POSIX.munmap(mmap); // unmapping also unlocks
>       }
>     }
>   }
>   return mappableBlock;
> }
> {code}
> We looked up
> https://docs.oracle.com/javase/7/docs/api/java/io/FileInputStream.html and
> http://hg.openjdk.java.net/jdk7/jdk7/jdk/file/23bdcede4e39/src/share/classes/java/io/FileInputStream.java
> and noticed FileInputStream relies on the Finalizer to release its resources. When an instance of a class that has a finalizer is created, an entry for that instance is put on a queue in the JVM so the JVM knows it has a finalizer that needs to be executed.
> The current issue is: even when programmers do call close() after using a FileInputStream, its finalize() method will still be called. In other words, we still get the side effect of the FinalReference being registered at FileInputStream allocation time, and also the reference processing to reclaim the FinalReference during GC (any GC solution has to deal with this).
> We can imagine that when running an industry deployment of HDFS, millions of files could be opened and closed, resulting in a very large number of finalizers being registered and subsequently executed. That could cause very long GC pause times.
> We tried to
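One general way to avoid finalizer-bearing streams (independent of the attached patches, which are not shown in this thread) is to open a FileChannel directly via NIO: {{FileChannel.open}} does not go through FileInputStream, so no FileInputStream finalizer is involved. A minimal self-contained sketch:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NoFinalizerIo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".dat");
        Files.write(tmp, new byte[]{1, 2, 3});
        // FileChannel.open returns a channel that is not backed by a
        // FileInputStream, so no FileInputStream FinalReference is
        // registered at allocation time.
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            System.out.println("size=" + ch.size());
        } finally {
            Files.delete(tmp);
        }
    }
}
```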
[jira] [Commented] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093051#comment-15093051 ] Kai Zheng commented on HDFS-9638: - Sounds good to focus this on the {{DistCp}} documentation improvement, as there are so many aspects to update. Thanks Wei-Chiu! > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > > For example, > -mapredSslConfConfiguration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security-related > properties may be specified in a config-file and passed to DistCp. > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8584) Support using ramfs partitions on Linux
[ https://issues.apache.org/jira/browse/HDFS-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093075#comment-15093075 ] Hudson commented on HDFS-8584: -- FAILURE: Integrated in Hadoop-trunk-Commit #9087 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9087/]) HDFS-8584. NPE in distcp when ssl configuration file does not exist in (xyao: rev c2e2e134555010ec28da296bcfef4ba2613a5c6c) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java > Support using ramfs partitions on Linux > --- > > Key: HDFS-8584 > URL: https://issues.apache.org/jira/browse/HDFS-8584 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Affects Versions: 2.7.0 >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > > Now that the bulk of work for HDFS-6919 is complete the memory limit > enforcement uses the {{dfs.datanode.max.locked.memory}} setting and not the > RAM disk free space availability. > We can now use ramfs partitions. This will require fixing the free space > computation and reservation logic for transient volumes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details
[ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated HDFS-9634: - Target Version/s: 3.0.0, 2.8.0 Status: Patch Available (was: Open) [~daryn], [~kihwal], and [~jlowe]: Attached HDFS-9634.001.patch > webhdfs client side exceptions don't provide enough details > --- > > Key: HDFS-9634 > URL: https://issues.apache.org/jira/browse/HDFS-9634 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 2.7.1, 3.0.0, 2.8.0 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: HDFS-9634.001.patch > > > When a WebHDFS client side exception (for example, read timeout) occurs there > are no details beyond the fact that a timeout occurred. Ideally it should say > which node is responsible for the timeout, but failing that it should at > least say which node we're talking to so we can examine that node's logs to > further investigate. > {noformat} > java.net.SocketTimeoutException: Read timed out > at java.net.SocketInputStream.socketRead0(Native Method) > at java.net.SocketInputStream.read(SocketInputStream.java:150) > at java.net.SocketInputStream.read(SocketInputStream.java:121) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at sun.net.www.MeteredStream.read(MeteredStream.java:134) > at java.io.FilterInputStream.read(FilterInputStream.java:133) > at > sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035) > at > org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121) > at > org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188) > at java.io.DataInputStream.read(DataInputStream.java:149) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at > 
com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58) > at java.io.FilterInputStream.read(FilterInputStream.java:107) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387) > ... 12 more > {noformat} > There are no clues as to which datanode we're talking to nor which datanode > was responsible for the timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details
[ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated HDFS-9634: - Attachment: HDFS-9634.001.patch > webhdfs client side exceptions don't provide enough details > --- > > Key: HDFS-9634 > URL: https://issues.apache.org/jira/browse/HDFS-9634 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0, 2.8.0, 2.7.1 >Reporter: Eric Payne >Assignee: Eric Payne > Attachments: HDFS-9634.001.patch > > > When a WebHDFS client side exception (for example, read timeout) occurs there > are no details beyond the fact that a timeout occurred. Ideally it should say > which node is responsible for the timeout, but failing that it should at > least say which node we're talking to so we can examine that node's logs to > further investigate. > {noformat} > java.net.SocketTimeoutException: Read timed out > at java.net.SocketInputStream.socketRead0(Native Method) > at java.net.SocketInputStream.read(SocketInputStream.java:150) > at java.net.SocketInputStream.read(SocketInputStream.java:121) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at sun.net.www.MeteredStream.read(MeteredStream.java:134) > at java.io.FilterInputStream.read(FilterInputStream.java:133) > at > sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035) > at > org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121) > at > org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188) > at java.io.DataInputStream.read(DataInputStream.java:149) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) > at java.io.BufferedInputStream.read(BufferedInputStream.java:334) > at > com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58) > at 
java.io.FilterInputStream.read(FilterInputStream.java:107) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57) > at > com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387) > ... 12 more > {noformat} > There are no clues as to which datanode we're talking to nor which datanode > was responsible for the timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9621: Attachment: HDFS-9621.002.branch-2.patch The {{createFileStatus}} change should also be included in branch-2. Upload a patch for branch-2. > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092793#comment-15092793 ] Arpit Agarwal commented on HDFS-9588: - Thanks for the updated patch [~anu]. A few minor comments. # We can probably remove the result code since lack of an exception is success. We take that approach for other responses in the same protocol. {code} message SubmitDiskBalancerPlanResponseProto { enum SubmitResults { OK = 0; // Plan accepted } required SubmitResults result = 1; {code} # Nitpick: indentation looks off. {code} /** * Submit a disk balancer plan for execution */ rpc submitDiskBalancerPlan(SubmitDiskBalancerPlanRequestProto) returns (SubmitDiskBalancerPlanResponseProto); {code} # Should this throw IOException instead of Exception? {code} public long submitDiskBalancerPlan(String planID, long planVersion, long bandwidth, String plan) throws Exception { {code} > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
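Following the first review comment, the response message could simply carry no fields, with success implied by the absence of an exception. A hypothetical sketch of the revised proto (not the committed definition):

```proto
/**
 * Response to submitDiskBalancerPlan. An empty message: success is
 * implied by the lack of an exception, matching the other responses
 * in this protocol.
 */
message SubmitDiskBalancerPlanResponseProto {
}
```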
[jira] [Commented] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093011#comment-15093011 ] Kai Zheng commented on HDFS-9638: - Good to have this to improve and update the documentation. In the mailing list I had some comments, as below. {quote} I read the doc at the following link and regard it as the latest revision that corresponds with the trunk codebase. http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html If that’s right, then we may need to complement it with the following important features, because I don’t see them mentioned in the doc. 1. The -diff option, which uses a snapshot diff report to identify the differences between source and target to compute the copy list. 2. The -numListstatusThreads option, the number of threads used to concurrently compute the copy list. 3. -p t, to preserve timestamps. Since the above features are a great way for users to speed up time-consuming inter- or intra-cluster syncs, we should not only add these options to the table of command-line options, but also document them as well as we did other functions. {quote} Would be good to check and address these questions here as well. Thanks. > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > > For example, > -mapredSslConfConfiguration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security-related > properties may be specified in a config-file and passed to DistCp. > needs to be in the classpath. 
" > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script
[ https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093009#comment-15093009 ] Lei (Eddy) Xu commented on HDFS-9005: - Hi, [~mingma] Thanks a lot for uploading this patch. I have a few questions regarding the JSON file and the code: * What is the expected format of the JSON file? E.g., it seems that each DN has a separate JSON object? What are the pros and cons compared with putting them into a JSON array? * As mentioned previously, it is going to put the include/exclude files into one "all" file. Do we use different sections (keys) to determine the included/excluded DNs, or the {{AdminState}}? It'd be nice to clarify that. * In {{HostsFileWriter#includeHosts/excludeHost}}, it seems that each function overwrites the whole conf file? Is that the expected behavior? A few minor issues: * There are a few comments that are not in javadoc format. * Could you add more comments to {{HostConfigManager}} and {{DatanodeAdminProperties}}? * This seems changed only due to whitespace/indentation; could you revert it? {code} @Override public String apply(@Nullable InetSocketAddress addr) { assert addr != null; return addr.getAddress().getHostAddress() + ":" + addr.getPort(); } })); {code} * It'd be nice to have a default value here. {code} dfs.namenode.hosts.provider.classname {code} Thanks! > Provide support for upgrade domain script > - > > Key: HDFS-9005 > URL: https://issues.apache.org/jira/browse/HDFS-9005 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-9005.patch > > > As part of the upgrade domain feature, we need to provide a mechanism to > specify an upgrade domain for each datanode. One way to accomplish that is to > allow admins to specify an upgrade domain script that takes a DN IP or hostname as > input and returns the upgrade domain. Then the namenode will use it at run time to > set {{DatanodeInfo}}'s upgrade domain string. 
The configuration can be > something like: > {noformat} > > dfs.namenode.upgrade.domain.script.file.name > /etc/hadoop/conf/upgrade-domain.sh > > {noformat} > just like the topology script. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
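To make the analogy with topology scripts concrete, here is a hypothetical upgrade-domain.sh sketch (the subnet rules and domain names are invented for illustration): it takes a datanode IP or hostname as its argument and prints the upgrade domain on stdout.

```shell
#!/bin/bash
# Hypothetical upgrade-domain.sh: map a datanode IP/hostname to an
# upgrade domain, the same way a topology script maps hosts to racks.
upgrade_domain() {
  case "$1" in
    10.0.1.*) echo "ud-1" ;;   # invented subnet-to-domain rules
    10.0.2.*) echo "ud-2" ;;
    *)        echo "ud-default" ;;
  esac
}

upgrade_domain "${1:-10.0.1.7}"
```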
[jira] [Commented] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093032#comment-15093032 ] Hadoop QA commented on HDFS-9629: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 52s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781689/HDFS-9629.01.patch | | JIRA Issue | HDFS-9629 | | Optional Tests | asflicense | | uname | Linux bb38c745f844 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b8942be | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Max memory used | 29MB | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14095/console | This message was automatically generated. 
> Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9639) Inconsistent Logging in BootstrapStandby
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092917#comment-15092917 ] Hadoop QA commented on HDFS-9639: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-9639 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781612/HADOOP-12674.001.patch | | JIRA Issue | HDFS-9639 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14092/console | This message was automatically generated. > Inconsistent Logging in BootstrapStandby > > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
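The debug/fatal branch quoted in the description can be collapsed into a single call that keeps both the severity and the stack trace. A minimal sketch of that suggestion, using java.util.logging so the example is self-contained (BootstrapStandby itself uses commons-logging, where the equivalent call is {{LOG.fatal(msg, e)}}):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class FatalLogExample {
  static final Logger LOG = Logger.getLogger(FatalLogExample.class.getName());

  /**
   * Replaces the quoted debug/fatal branch with one unconditional call:
   * the message is always logged at the highest severity, and the
   * exception (with its stack trace) is never dropped, regardless of
   * whether debug logging happens to be enabled.
   */
  static void reportFatal(String msg, Exception e) {
    LOG.log(Level.SEVERE, msg, e);
  }
}
```

Operators filtering on fatal-level messages then always see both the message and the exception.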
[jira] [Updated] (HDFS-9639) Inconsistent Logging in BootstrapStandby
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9639: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1 Committed to trunk, branch-2 and branch-2.8. Thanks for the contribution [~xiaobingo]. > Inconsistent Logging in BootstrapStandby > > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092973#comment-15092973 ] Wei-Chiu Chuang commented on HDFS-9638: --- Additionally, hsftp is deprecated by HDFS-5570. We should also update the documentation. It is unclear if the parameter -mapredSslConf is still valid. > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > > For example, > -mapredSslConf <ssl_conf_file> Configuration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security-related > properties may be specified in a config-file and passed to DistCp. <ssl_conf_file> > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9640) Remove hsftp from DistCp
[ https://issues.apache.org/jira/browse/HDFS-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9640: -- Description: Per discussion in HDFS-9638, after HDFS-5570, hftp/hsftp are removed from Hadoop 3.0.0. But DistCp still makes reference to hsftp via parameter -mapredSslConf. This parameter would be useless after Hadoop 3.0.0; therefore it should be removed, and then document the changes. This JIRA is intended to track the status of the code/docs change involving the removal of hsftp in DistCp. was: Per discussion in HDFS-9638, after HDFS-5570, hftp/hsftp are removed from Hadoop 3.0.0. But DistCp still makes references to hsftp via parameter -mapredSslConf. This parameter would be useless after Hadoop 3.0.0, and therefore should be removed, and document the changes. This JIRA is intended to track the status of the code/docs change involving the removal of hsftp in DistCp. > Remove hsftp from DistCp > > > Key: HDFS-9640 > URL: https://issues.apache.org/jira/browse/HDFS-9640 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > > Per discussion in HDFS-9638, > after HDFS-5570, hftp/hsftp are removed from Hadoop 3.0.0. But DistCp still > makes reference to hsftp via parameter -mapredSslConf. This parameter would > be useless after Hadoop 3.0.0; therefore it should be removed, and then > document the changes. > This JIRA is intended to track the status of the code/docs change involving > the removal of hsftp in DistCp. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode
[ https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092796#comment-15092796 ] Anu Engineer commented on HDFS-1312: Hi [~andrew.wang], As discussed off-line, let us meet on the 14th of Jan, 2016 @ 4:00 - 5:00 PM PST. Here is the meeting info. I look forward to chatting with other Apache members who might be interested in this topic. Anu Engineer is inviting you to a scheduled Zoom meeting. {noformat} Topic: HDFS-1312 discussion Time: Jan 14, 2016 4:00 PM (GMT-8:00) Pacific Time (US and Canada) Join from PC, Mac, Linux, iOS or Android: https://hortonworks.zoom.us/j/267578285 Or join by phone: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll) +1 855 880 1246 (US Toll Free) +1 888 974 9888 (US Toll Free) Meeting ID: 267 578 285 International numbers available: https://hortonworks.zoom.us/zoomconference?m=ZlHRHTGmVEKXzM_RaCyzcSnjlk_z3ovm {noformat} > Re-balance disks within a Datanode > -- > > Key: HDFS-1312 > URL: https://issues.apache.org/jira/browse/HDFS-1312 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode >Reporter: Travis Crawford >Assignee: Anu Engineer > Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf > > > Filing this issue in response to ``full disk woes`` on hdfs-user. > Datanodes fill their storage directories unevenly, leading to situations > where certain disks are full while others are significantly less used. Users > at many different sites have experienced this issue, and HDFS administrators > are taking steps like: > - Manually rebalancing blocks in storage directories > - Decommissioning nodes & later re-adding them > There's a tradeoff between making use of all available spindles, and filling > disks at the same-ish rate. Possible solutions include: > - Weighting less-used disks heavier when placing new blocks on the datanode. > In write-heavy environments this will still make use of all spindles, > equalizing disk use over time. 
> - Rebalancing blocks locally. This would help equalize disk use as disks are > added/replaced in older cluster nodes. > Datanodes should actively manage their local disk so operator intervention is > not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
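The "weight less-used disks heavier" option above amounts to choosing a volume for each new block with probability proportional to its free space, so emptier disks fill faster until utilization converges. A standalone sketch of that idea (not HDFS code; the class and method names are invented for illustration):

```java
import java.util.Random;

public class WeightedVolumeChooser {
  private final Random rand;

  public WeightedVolumeChooser(Random rand) {
    this.rand = rand;
  }

  /**
   * Pick a volume index with probability proportional to its free space.
   * A volume with zero free bytes is never chosen; if all volumes have
   * equal free space, the choice is uniform.
   *
   * @param freeBytes freeBytes[i] = free space on volume i
   */
  public int choose(long[] freeBytes) {
    long total = 0;
    for (long f : freeBytes) {
      total += f;
    }
    if (total <= 0) {
      throw new IllegalStateException("no free space on any volume");
    }
    // Draw a point in [0, total) and walk the cumulative free-space sums.
    long pick = (long) (rand.nextDouble() * total);
    for (int i = 0; i < freeBytes.length; i++) {
      pick -= freeBytes[i];
      if (pick < 0) {
        return i;
      }
    }
    return freeBytes.length - 1; // unreachable with consistent inputs
  }
}
```

In a write-heavy cluster this keeps all spindles in use while still steering more new blocks toward the less-full disks.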
[jira] [Updated] (HDFS-9639) BootstrapStandby - Inconsistent Logging
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9639: Issue Type: Bug (was: Improvement) > BootstrapStandby - Inconsistent Logging > --- > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9639) Inconsistent Logging in BootstrapStandby
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9639: Summary: Inconsistent Logging in BootstrapStandby (was: BootstrapStandby - Inconsistent Logging) > Inconsistent Logging in BootstrapStandby > > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9588: --- Attachment: HDFS-9588-HDFS-1312.004.patch > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch, > HDFS-9588-HDFS-1312.004.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093039#comment-15093039 ] zuotingbing commented on HDFS-9617: --- Yes, I got you; this is an abnormal test scenario, but even so it should not cause the client to crash with OOM, right? I just want to know why this scenario causes the client OOM; maybe some underlying streams of HDFS have not been closed? Thank you very much. > my java client use muti-thread to put a same file to a same hdfs uri, after > no lease error,then client OutOfMemoryError > --- > > Key: HDFS-9617 > URL: https://issues.apache.org/jira/browse/HDFS-9617 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zuotingbing > Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java > > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. > Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250] > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy14.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy15.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > my java client(JVM -Xmx=2G) : > jmap TOP15: > num #instances #bytes class name > -- >1: 48072 2053976792 [B >2: 458525987568 >3: 458525878944 >4: 33634193112 >5: 33632548168 >6: 27332299008 >7: 5332191696 [Ljava.nio.ByteBuffer; >8: 247332026600 [C >9: 312872002368 > org.apache.hadoop.hdfs.DFSOutputStream$Packet > 10: 31972 767328 java.util.LinkedList$Node > 11: 22845 548280 
java.lang.String > 12: 20372 488928 java.util.concurrent.atomic.AtomicLong > 13: 3700 452984 java.lang.Class > 14: 981 439576 > 15: 5583 376344 [S -- This message was sent by Atlassian JIRA
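The jmap dump above shows ~31k retained {{DFSOutputStream$Packet}} instances, which is consistent with output streams that were never closed after the lease errors. Whatever the server-side verdict, the client-side defense is to guarantee {{close()}} runs on every path, e.g. with try-with-resources. A generic sketch (plain java.io types stand in here; {{FSDataOutputStream}} returned by {{FileSystem.create()}} is also an {{OutputStream}}, so the same pattern applies):

```java
import java.io.IOException;
import java.io.OutputStream;

public final class SafeUpload {
  /**
   * Write data and guarantee the stream is closed, even when write()
   * throws (e.g. a LeaseExpiredException surfacing as an IOException
   * from a concurrent writer). Without the guaranteed close, the
   * stream's internal packet buffers stay referenced and accumulate
   * across retries until the client runs out of heap.
   */
  public static void upload(OutputStream out, byte[] data) throws IOException {
    try (OutputStream o = out) { // close() runs on success and on failure
      o.write(data);
      o.flush();
    }
  }
}
```

The same applies per-thread in a multi-threaded uploader: each thread must close its own stream in a try-with-resources (or finally) block rather than only on the success path.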
[jira] [Commented] (HDFS-9639) Inconsistent Logging in BootstrapStandby
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092936#comment-15092936 ] Hudson commented on HDFS-9639: -- FAILURE: Integrated in Hadoop-trunk-Commit #9086 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9086/]) HDFS-9639. Inconsistent Logging in BootstrapStandby. (Contributed by (arp: rev de37f37543c2fb07ca53bb6000d50b36ec70d084) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java > Inconsistent Logging in BootstrapStandby > > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Bug > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092995#comment-15092995 ] Hadoop QA commented on HDFS-9588: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 48s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 {color} 
| | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s {color} | {color:red} Patch generated 1 new checkstyle issues in hadoop-hdfs-project (total was 145, now 145). {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 36s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 41s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s {color} | {color:red} Patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 175m 30s {color} | {color:black} {color} | \\ \\ || Reason || Tests
[jira] [Commented] (HDFS-9244) Support nested encryption zones
[ https://issues.apache.org/jira/browse/HDFS-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093013#comment-15093013 ] Xiaoyu Yao commented on HDFS-9244: -- Thanks [~zhz] for working on this. Can we clarify the use cases (in addition to the original one mentioned in the description) before unblocking this? And how often are they used/requested in customer deployments? My concern is that this could bring up tricky cases such as upgrade/rollback, trash, etc. to document, support and maintain for nested zones. We don't want to introduce unnecessary complexity unless there are important use cases behind it. Thanks! > Support nested encryption zones > --- > > Key: HDFS-9244 > URL: https://issues.apache.org/jira/browse/HDFS-9244 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiaoyu Yao >Assignee: Zhe Zhang > Attachments: HDFS-9244.00.patch, HDFS-9244.01.patch > > > This JIRA is opened to track adding support for nested encryption zones based > on [~andrew.wang]'s [comment > |https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141] > for certain use cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093043#comment-15093043 ] Wei-Chiu Chuang commented on HDFS-9638: --- I think we should file a separate JIRA to remove -mapredSslConf code and docs entirely from Hadoop 3.0.0, and make this JIRA purely a documentation improvement, because the end of hsftp support is an incompatible change in Hadoop 3.0.0. > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > > For example, > -mapredSslConf <ssl_conf_file> Configuration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security-related > properties may be specified in a config-file and passed to DistCp. <ssl_conf_file> > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HDFS-9639) BootstrapStandby - Inconsistent Logging
[ https://issues.apache.org/jira/browse/HDFS-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal moved HADOOP-12674 to HDFS-9639: -- Affects Version/s: (was: 2.7.1) 2.7.1 Component/s: (was: ha) ha Key: HDFS-9639 (was: HADOOP-12674) Project: Hadoop HDFS (was: Hadoop Common) > BootstrapStandby - Inconsistent Logging > --- > > Key: HDFS-9639 > URL: https://issues.apache.org/jira/browse/HDFS-9639 > Project: Hadoop HDFS > Issue Type: Improvement > Components: ha >Affects Versions: 2.7.1 >Reporter: BELUGA BEHR >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HADOOP-12674.001.patch > > > {code} > /* Line 379 */ > if (LOG.isDebugEnabled()) { > LOG.debug(msg, e); > } else { > LOG.fatal(msg); > } > {code} > Why would message, considered "fatal" under most operating circumstances be > considered "debug" when debugging is on. This is confusing to say the least. > If there is a problem and the user attempts to debug the situation, they may > be filtering on "fatal" messages and miss the exception. > Please consider using only the fatal logging, and including the exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092997#comment-15092997 ] Kai Zheng commented on HDFS-9624: - Hi Yiqun, Looking at the latest patch, it looks fine apart from two minor issues: * {{Initial the cachedDfsUsedInternalTime larger than sleepInternalTime}}: {{Initial}} should be {{Initialize}}. Similar for the other places. * Jenkins reported checkstyle issues: there are lines that are too long. When you get these fixed and updated, please be patient. If you're lucky, this one will be picked up soon by an HDFS committer for review and commit. Thanks. > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch, HDFS-9624.004.patch > > > The datanode seems to start very slowly after I finish migrating > datanodes and restart them. I looked at the DN logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... 
> 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... > 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO >
[jira] [Updated] (HDFS-9629) Update the footer of Web UI to show year 2016
[ https://issues.apache.org/jira/browse/HDFS-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-9629: Attachment: HDFS-9629.01.patch Patch 1 tries to show the year dynamically. Ping [~ajisakaa] and [~brahmareddy] for review and advice on this, since you guys worked on HDFS-8149. Thanks very much in advance! > Update the footer of Web UI to show year 2016 > - > > Key: HDFS-9629 > URL: https://issues.apache.org/jira/browse/HDFS-9629 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiao Chen >Assignee: Xiao Chen > Labels: supportability > Attachments: HDFS-9629.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9047) Retire libwebhdfs
[ https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093085#comment-15093085 ] Colin Patrick McCabe commented on HDFS-9047: I was on vacation so I couldn't comment earlier. Anyway, since {{hadoop-hdfs-native-client}} got merged, I agree that {{libwebhdfs}} is no longer strictly necessary and I withdraw my -1. Thanks, all. > Retire libwebhdfs > - > > Key: HDFS-9047 > URL: https://issues.apache.org/jira/browse/HDFS-9047 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Reporter: Allen Wittenauer >Assignee: Haohui Mai > Fix For: 2.8.0 > > Attachments: HDFS-9047-branch-2.7.patch, HDFS-9047.000.patch > > > This library is basically a mess: > * It's not part of the mvn package > * It's missing functionality and barely maintained > * It's not in the precommit runs so doesn't get exercised regularly > * It's not part of the unit tests (at least, that I can see) > * It isn't documented in any official documentation > But most importantly: > * It fails at its primary mission of being pure C (HDFS-3917 is STILL open) > Let's cut our losses and just remove it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9638) Improve DistCp Help and documentation
Wei-Chiu Chuang created HDFS-9638: - Summary: Improve DistCp Help and documentation Key: HDFS-9638 URL: https://issues.apache.org/jira/browse/HDFS-9638 Project: Hadoop HDFS Issue Type: Improvement Components: distcp Affects Versions: 3.0.0 Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Priority: Minor For example, -mapredSslConfConfiguration for ssl config file, to use with hftps:// But this ssl config file should be in the classpath, which is not clearly stated. http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html "When using the hsftp protocol with a source, the security- related properties may be specified in a config-file and passed to DistCp. needs to be in the classpath. " It is also not clear from the context if this ssl_conf_file should be at the client issuing the command. (I think the answer is yes) Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html "The following is an example of the contents of the contents of a SSL Configuration file:" there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9628: - Attachment: HDFS-9628.HDFS-8707.006.patch New patch: move from linking to hdfs_static to individual libs > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch, > HDFS-9628.HDFS-8707.004.patch, HDFS-9628.HDFS-8707.005.patch, > HDFS-9628.HDFS-8707.005.patch, HDFS-9628.HDFS-8707.006.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092790#comment-15092790 ] Hadoop QA commented on HDFS-9628: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} docker {color} | {color:red} 9m 21s {color} | {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781656/HDFS-9628.HDFS-8707.006.patch | | JIRA Issue | HDFS-9628 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14091/console | This message was automatically generated. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch, > HDFS-9628.HDFS-8707.004.patch, HDFS-9628.HDFS-8707.005.patch, > HDFS-9628.HDFS-8707.005.patch, HDFS-9628.HDFS-8707.006.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9244) Support nested encryption zones
[ https://issues.apache.org/jira/browse/HDFS-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-9244: Attachment: HDFS-9244.01.patch Updating the patch to fix test failure in {{TestCryptoAdminCLI}}. It was assuming the old EZ behavior (cannot create nested EZ). > Support nested encryption zones > --- > > Key: HDFS-9244 > URL: https://issues.apache.org/jira/browse/HDFS-9244 > Project: Hadoop HDFS > Issue Type: New Feature > Components: encryption >Reporter: Xiaoyu Yao >Assignee: Zhe Zhang > Attachments: HDFS-9244.00.patch, HDFS-9244.01.patch > > > This JIRA is opened to track adding support of nested encryption zone based > on [~andrew.wang]'s [comment > |https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141] > for certain use cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9588: --- Attachment: (was: HDFS-9588-HDFS-1312.004.patch) > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093025#comment-15093025 ] Wei-Chiu Chuang commented on HDFS-9638: --- Thanks for the suggestion! [~drankye] I looked at trunk, and DistCp.md.vm does not mention -diff, -numListstatus -p[t]. It also does not explain -skipcrccheck well. > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > > For example, > -mapredSslConfConfiguration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security- related > properties may be specified in a config-file and passed to DistCp. > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9640) Remove hsftp from DistCp
Wei-Chiu Chuang created HDFS-9640: - Summary: Remove hsftp from DistCp Key: HDFS-9640 URL: https://issues.apache.org/jira/browse/HDFS-9640 Project: Hadoop HDFS Issue Type: Bug Components: distcp Affects Versions: 3.0.0 Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Per the discussion in HDFS-9638, after HDFS-5570, hftp/hsftp are removed from Hadoop 3.0.0. But DistCp still references hsftp via the -mapredSslConf parameter. This parameter will be useless in Hadoop 3.0.0 and later, so it should be removed and the change documented. This JIRA tracks the status of the code/docs change involved in removing hsftp from DistCp. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9635) Add one more volume choosing policy with considering volume IO load
Yong Zhang created HDFS-9635: Summary: Add one more volume choosing policy with considering volume IO load Key: HDFS-9635 URL: https://issues.apache.org/jira/browse/HDFS-9635 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Reporter: Yong Zhang Assignee: Yong Zhang We have RoundRobinVolumeChoosingPolicy and AvailableSpaceVolumeChoosingPolicy, but neither considers volume IO load. This JIRA will add one more volume choosing policy, based on the xceiver count on each volume. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
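The idea proposed in HDFS-9635 can be sketched in plain Java. This is an illustrative, self-contained sketch: the names `Volume`, `xceiverCount`, and `chooseLowestIoVolume` are invented here and do not mirror Hadoop's actual `VolumeChoosingPolicy` API. Among the volumes with enough free space for the block, it picks the one currently serving the fewest xceiver (reader/writer) threads.

```java
import java.util.Arrays;
import java.util.List;

public class XceiverCountPolicySketch {
    // Minimal stand-in for a DataNode volume; fields are hypothetical.
    static class Volume {
        final String path;
        final long available;     // free bytes on this volume
        final int xceiverCount;   // active reader/writer threads on this volume
        Volume(String path, long available, int xceiverCount) {
            this.path = path; this.available = available; this.xceiverCount = xceiverCount;
        }
    }

    // Choose the least-loaded volume that can still hold blockSize bytes.
    static Volume chooseLowestIoVolume(List<Volume> volumes, long blockSize) {
        Volume best = null;
        for (Volume v : volumes) {
            if (v.available < blockSize) continue;  // not enough space, skip
            if (best == null || v.xceiverCount < best.xceiverCount) best = v;
        }
        return best;  // null if no volume has room
    }

    public static void main(String[] args) {
        List<Volume> vols = Arrays.asList(
            new Volume("/data1", 10_000_000L, 8),
            new Volume("/data2", 10_000_000L, 2),
            new Volume("/data3", 100L, 0));  // nearly full, must be skipped
        Volume chosen = chooseLowestIoVolume(vols, 1_000_000L);
        System.out.println(chosen.path);  // /data2: enough space, fewest xceivers
    }
}
```

Unlike AvailableSpaceVolumeChoosingPolicy, this ranks by current IO activity and uses free space only as an eligibility filter.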
[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.
[ https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093496#comment-15093496 ] Tsz Wo Nicholas Sze commented on HDFS-8999: --- > COMPLETE state used to mean that the number of reported replicas is >= > minReplication, not > 1. ... The previous patch did not change any logic of block COMPLETE state. It checked if #locations > 1 before allowing close a file with COMMITTED blocks. It makes sense to change it to > minReplication. Note that it is > but not >= since we don't want to allow closing file if #locations == minReplication. > Namenode need not wait for {{blockReceived}} for the last block before > completing a file. > - > > Key: HDFS-8999 > URL: https://issues.apache.org/jira/browse/HDFS-8999 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Jitendra Nath Pandey >Assignee: Tsz Wo Nicholas Sze > Attachments: h8999_20151228.patch, h8999_20160106.patch, > h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch > > > This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment > from the jira: > {quote} > ...whether we need to let NameNode wait for all the block_received msgs to > announce the replica is safe. Looking into the code, now we have ># NameNode knows the DataNodes involved when initially setting up the > writing pipeline ># If any DataNode fails during the writing, client bumps the GS and > finally reports all the DataNodes included in the new pipeline to NameNode > through the updatePipeline RPC. ># When the client received the ack for the last packet of the block (and > before the client tries to close the file on NameNode), the replica has been > finalized in all the DataNodes. > Then in this case, when NameNode receives the close request from the client, > the NameNode already knows the latest replicas for the block. 
Currently the > checkReplication call only counts in all the replicas that NN has already > received the block_received msg, but based on the above #2 and #3, it may be > safe to also count in all the replicas in the > BlockUnderConstructionFeature#replicas? > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
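The rule discussed in the comment above — allow closing a file whose last block is still COMMITTED only when the number of known replica locations is strictly greater than minReplication — can be stated as a one-line predicate. This is an illustrative sketch; the method name `mayCloseWithCommittedBlock` is invented and this is not the actual NameNode code.

```java
public class CommittedCloseCheck {
    // Strictly greater than, not >=: if #locations == minReplication, losing a
    // single location before the block_received messages arrive could drop the
    // block below the minimum, so closing is not allowed in that case.
    static boolean mayCloseWithCommittedBlock(int numLocations, int minReplication) {
        return numLocations > minReplication;
    }

    public static void main(String[] args) {
        // With minReplication = 1: one known location is not enough, two are.
        System.out.println(mayCloseWithCommittedBlock(1, 1)); // false
        System.out.println(mayCloseWithCommittedBlock(2, 1)); // true
    }
}
```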
[jira] [Commented] (HDFS-9642) Create reader threads pool on demand according to erasure coding policy
[ https://issues.apache.org/jira/browse/HDFS-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093474#comment-15093474 ] Kai Zheng commented on HDFS-9642: - Looks like creating the unused thread pool (with core thread 1, setting {{allowsCoreThreadTimeOut}} true) for replication won't incur much overhead, but creating it using the default value may be problematic. It would still rely on a configuration value to calculate the thread pool size as current code does, since it's not clear how many read tasks will happen in a client. Suggest changing the current configuration item {{THREADPOOL_SIZE_KEY}} to something like {{THREADPOOL_SIZE_FACTOR}}, then the pool size can be calculated as: {{THREADPOOL_SIZE_FACTOR * BLOCKS_IN_ONE_GROUP}}. But then how about changing to use other EC policy with different schema? Need some discussion. > Create reader threads pool on demand according to erasure coding policy > --- > > Key: HDFS-9642 > URL: https://issues.apache.org/jira/browse/HDFS-9642 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kai Zheng >Assignee: Kai Zheng > > While investigating some issue it was noticed in {{DFSClient}}, > {{STRIPED_READ_THREAD_POOL}} will be always created during initialization and > by default regardless the used erasure coding policy it uses the value *18*. > This suggests: > * Create the thread pool on demand only in striping case. > * When create the pool, use a good value respecting the used erasure coding > policy. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
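The suggestion above can be sketched with the standard `java.util.concurrent` API. This is a hedged sketch, not DFSClient's actual code: `sizeFactor` and `blocksInOneGroup` stand in for whatever the configuration item and EC schema would supply, and the queue choice is illustrative. The pool is sized from the block-group width, keeps one core thread, and lets idle core threads time out so an unused pool costs almost nothing.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class StripedReadPoolSketch {
    // Pool size derived from the EC schema rather than a fixed default of 18.
    static int poolSize(int sizeFactor, int blocksInOneGroup) {
        return sizeFactor * blocksInOneGroup;
    }

    static ThreadPoolExecutor newStripedReadPool(int sizeFactor, int blocksInOneGroup) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1,                                          // 1 core thread
            poolSize(sizeFactor, blocksInOneGroup),     // bounded by the schema
            60, TimeUnit.SECONDS,                       // idle-thread keep-alive
            new SynchronousQueue<>());                  // hand off directly to a thread
        pool.allowCoreThreadTimeOut(true);  // an idle pool shrinks to zero threads
        return pool;
    }

    public static void main(String[] args) {
        // e.g. an RS-6-3-like schema: a block group spans 9 blocks.
        ThreadPoolExecutor pool = newStripedReadPool(2, 9);
        System.out.println(pool.getMaximumPoolSize()); // 18
        pool.shutdown();
    }
}
```

With `allowCoreThreadTimeOut(true)`, even the single core thread exits after 60 idle seconds, which is why pre-creating such a pool for the replication (non-striped) case carries little overhead.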
[jira] [Commented] (HDFS-9244) Support nested encryption zones
[ https://issues.apache.org/jira/browse/HDFS-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093145#comment-15093145 ] Hadoop QA commented on HDFS-9244: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 11s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 31s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 160m 50s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.datanode.TestFsDatasetCache | | JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ca8df7 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781676/HDFS-9244.01.patch | | JIRA Issue | HDFS-9244 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs
[jira] [Updated] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9624: Attachment: (was: HDFS-9624.004.patch) > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch > > > It seems starting datanode so slowly when I am finishing migration of > datanodes and restart them.I look the dn logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... 
> 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data2/current: 289680ms > 2016-01-06 16:10:00,333 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data5/current: 292153ms > 2016-01-06 16:10:05,696 INFO >
[jira] [Updated] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9638: -- Attachment: HDFS-9638.001.patch Rev01: work in progress. Added description of several command line parameters. TODO: check if there are other missing parameter descriptions. > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > Attachments: HDFS-9638.001.patch > > > For example, > -mapredSslConfConfiguration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security- related > properties may be specified in a config-file and passed to DistCp. > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9522) Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport
[ https://issues.apache.org/jira/browse/HDFS-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093137#comment-15093137 ] Kai Zheng commented on HDFS-9522: - Looking at the patch, this looks more like a refactoring of the existing code than a minor cleanup. > Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport > -- > > Key: HDFS-9522 > URL: https://issues.apache.org/jira/browse/HDFS-9522 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HDFS-9522-001.patch, HDFS-9522-002.patch, > HDFS-9522-003.patch, HDFS-9522-004.patch, HDFS-9522-005.patch, > HDFS-9522-006.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > The current DiffReportEntry is a C-style tagged union-like data structure. > Recommend subclass hierarchy as in Java idiom. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
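The recommendation in HDFS-9522 — replace a C-style tagged union with a subclass hierarchy — can be illustrated with a toy diff-report entry. The class names here are invented for illustration and do not mirror the actual `DiffReportEntry` or the patch.

```java
public class DiffEntrySketch {
    // A C-style tagged union would be one Entry class with a Type enum tag
    // plus fields (e.g. a target path) that are only meaningful for some
    // tags. The Java idiom below gives each entry kind its own subclass
    // carrying exactly the fields it needs, enforced by the compiler.
    static abstract class Entry {
        final String path;
        Entry(String path) { this.path = path; }
        abstract String describe();
    }

    static final class Create extends Entry {
        Create(String path) { super(path); }
        String describe() { return "+ " + path; }
    }

    static final class Delete extends Entry {
        Delete(String path) { super(path); }
        String describe() { return "- " + path; }
    }

    static final class Rename extends Entry {
        final String target;  // only renames have a target path
        Rename(String path, String target) { super(path); this.target = target; }
        String describe() { return path + " -> " + target; }
    }

    public static void main(String[] args) {
        Entry[] report = { new Create("/a"), new Rename("/a", "/b"), new Delete("/c") };
        for (Entry e : report) {
            System.out.println(e.describe());
        }
    }
}
```

The hierarchy removes the "which fields are valid for this tag?" invariant that a tagged union forces every caller to remember.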
[jira] [Updated] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9624: Attachment: HDFS-9624.004.patch Thanks for comments. Update the patch, the new default property key name is long, so I don't modify them. > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch, HDFS-9624.004.patch > > > It seems starting datanode so slowly when I am finishing migration of > datanodes and restart them.I look the dn logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... 
> 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... > 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data2/current: 289680ms > 2016-01-06 16:10:00,333 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool
[jira] [Updated] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9638: -- Attachment: HDFS-9638.002.patch Rev02: I noticed that HADOOP-11009 did not include the test for preserving timestamp. Not sure if it's appropriate to be included in this JIRA, but here it is: > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 2.7.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > Attachments: HDFS-9638.001.patch, HDFS-9638.002.patch > > > For example, > -mapredSslConfConfiguration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security- related > properties may be specified in a config-file and passed to DistCp. > needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093341#comment-15093341 ] Kai Zheng commented on HDFS-9617: - bq. whatever it should not cause the client dumped with OOM You opened 1 threads to upload a file, and then ask why it OOMed? This sounds really interesting. Please do yourself a favor and investigate it; you should not expect others to have the time for this. > my java client use muti-thread to put a same file to a same hdfs uri, after > no lease error,then client OutOfMemoryError > --- > > Key: HDFS-9617 > URL: https://issues.apache.org/jira/browse/HDFS-9617 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zuotingbing > Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java > > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. > Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250] > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy14.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy15.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > my java client(JVM -Xmx=2G) : > jmap TOP15: > num #instances #bytes class name > -- >1: 48072 2053976792 [B >2: 458525987568 >3: 458525878944 >4: 33634193112 >5: 33632548168 >6: 27332299008 >7: 5332191696 [Ljava.nio.ByteBuffer; >8: 247332026600 [C >9: 312872002368 > org.apache.hadoop.hdfs.DFSOutputStream$Packet > 10: 31972 767328 java.util.LinkedList$Node > 11: 22845 548280 
java.lang.String > 12: 20372 488928 java.util.concurrent.atomic.AtomicLong > 13: 3700 452984 java.lang.Class > 14: 981 439576 > 15: 5583 376344 [S -- This message was sent by Atlassian JIRA
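The lease error above follows from HDFS's single-writer model: each path has exactly one lease holder, and a second create() on the same path displaces the first. A minimal, hypothetical lease table in plain Java (a simplified model, not Hadoop's actual NameNode code) reproduces the effect:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical single-writer lease table. create() grants the lease to the
// newest holder, so an earlier writer's addBlock() is then rejected --
// mirroring the LeaseExpiredException "No lease on /Tmp2/43.bmp.tmp" above.
class LeaseTable {
    private final Map<String, String> leases = new ConcurrentHashMap<>();

    // Creating a path (re)assigns its lease, displacing any previous holder.
    void create(String path, String holder) {
        leases.put(path, holder);
    }

    // addBlock succeeds only for the current lease holder of the path.
    boolean addBlock(String path, String holder) {
        return holder.equals(leases.get(path));
    }
}
```

Once a second thread re-creates the path, the first thread's addBlock() is rejected; with retries, the unsent DFSOutputStream$Packet objects then accumulate on the client, which is consistent with the jmap histogram above.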
[jira] [Resolved] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zuotingbing resolved HDFS-9617. --- Resolution: Invalid > my java client use muti-thread to put a same file to a same hdfs uri, after > no lease error,then client OutOfMemoryError > --- > > Key: HDFS-9617 > URL: https://issues.apache.org/jira/browse/HDFS-9617 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zuotingbing > Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java > > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. > Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250] > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy14.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy15.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > my java client(JVM -Xmx=2G) : > jmap TOP15: > num #instances #bytes class name > -- >1: 48072 2053976792 [B >2: 458525987568 >3: 458525878944 >4: 33634193112 >5: 33632548168 >6: 27332299008 >7: 5332191696 [Ljava.nio.ByteBuffer; >8: 247332026600 [C >9: 312872002368 > org.apache.hadoop.hdfs.DFSOutputStream$Packet > 10: 31972 767328 java.util.LinkedList$Node > 11: 22845 548280 java.lang.String > 12: 20372 488928 java.util.concurrent.atomic.AtomicLong > 13: 3700 452984 java.lang.Class > 14: 981 439576 > 15: 5583 376344 [S -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093383#comment-15093383 ] Tsz Wo Nicholas Sze commented on HDFS-9621: --- +1 the branch-2 patch looks good. > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 3.0.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
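The bug and its fix can be modeled in a few lines: the buggy getListing derives the EC policy from the parent directory's current state, while the fix reports the policy recorded with the file itself. This is a simplified, hypothetical model in plain Java, not the NameNode's actual inode code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model: a file created before its directory received an EC
// policy stores no striping info of its own, so listing must report the
// policy captured at file-creation time, not the directory's current one.
class EcListing {
    static final String NONE = "replicated";
    final Map<String, String> dirPolicy = new HashMap<>();   // dir -> current EC policy
    final Map<String, String> filePolicy = new HashMap<>();  // file -> policy at create time

    void setDirPolicy(String dir, String policy) { dirPolicy.put(dir, policy); }

    void createFile(String dir, String name) {
        // The file captures the directory's policy as of creation time.
        filePolicy.put(dir + "/" + name, dirPolicy.getOrDefault(dir, NONE));
    }

    // Buggy listing: reports the directory's *current* policy for every child.
    String listBuggy(String dir, String name) { return dirPolicy.getOrDefault(dir, NONE); }

    // Fixed listing: reports the policy stored with the file itself.
    String listFixed(String dir, String name) { return filePolicy.get(dir + "/" + name); }
}
```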
[jira] [Commented] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093234#comment-15093234 ] Hadoop QA commented on HDFS-9588: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 24s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s {color} | {color:green} HDFS-1312 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s {color} | {color:green} HDFS-1312 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s {color} | {color:green} HDFS-1312 passed with JDK v1.7.0_91 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s {color} | {color:red} Patch generated 1 new checkstyle issues in hadoop-hdfs-project (total was 145, now 145). {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 16s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 17s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s {color} | {color:red} Patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 174m 24s {color} | {color:black} {color} | \\ \\ || Reason ||
[jira] [Commented] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.
[ https://issues.apache.org/jira/browse/HDFS-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093265#comment-15093265 ] Surendra Singh Lilhore commented on HDFS-9584: -- Thanks [~xyao] for the review and commit. Thanks [~jojochuang] for the review. > NPE in distcp when ssl configuration file does not exist in class path. > --- > > Key: HDFS-9584 > URL: https://issues.apache.org/jira/browse/HDFS-9584 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Labels: supportability > Fix For: 2.8.0 > > Attachments: HDFS-9584.001.patch, HDFS-9584.patch, HDFS-9584.patch > > > {noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml > hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat} > If the {{ssl-distcp.xml}} file does not exist on the classpath, distcp will throw a > NullPointerException. > {code} > java.lang.NullPointerException > at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266) > at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250) > at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175) > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:127) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:431) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
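The fix direction is a straightforward guard: ClassLoader.getResource returns null when the named file is not on the classpath, so checking for null and failing with a clear message avoids the NPE. A minimal sketch with illustrative names, not DistCp's actual code:

```java
import java.net.URL;

// Sketch of the guard: look the config file up on the classpath, and fail
// fast with an actionable message when it is absent, instead of letting a
// later dereference blow up with NullPointerException.
class SslConfigCheck {
    static URL locate(String name) {
        URL url = Thread.currentThread().getContextClassLoader().getResource(name);
        if (url == null) {
            throw new IllegalArgumentException(
                "SSL config file " + name + " not found on the classpath");
        }
        return url;
    }
}
```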
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9588: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-1312 Release Note: +1. Jenkins failures are unrelated to the patch. I committed this to the feature branch. Thanks [~anu]. Target Version/s: (was: HDFS-1312) Status: Resolved (was: Patch Available) > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch, > HDFS-9588-HDFS-1312.004.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9621: Fix Version/s: (was: 3.0.0) 2.9.0 > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 2.9.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9624: Attachment: (was: HDFS-9624.004.patch) > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch > > > The datanode seems to start very slowly after I finish migrating > datanodes and restart them. Looking at the dn logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... 
> 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data2/current: 289680ms > 2016-01-06 16:10:00,333 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data5/current: 292153ms > 2016-01-06 16:10:05,696 INFO >
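Given that each volume's initial scan above takes close to five minutes, one mitigation direction (a sketch under assumptions, not this JIRA's patch) is to persist each volume's last computed usage and trust it at startup when it is recent, skipping the expensive initial du; the cache file name and staleness window below are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: cache the per-volume "dfsUsed" value on disk so a restart can
// reuse it instead of re-running a full du scan. Illustrative file name
// ("dfsUsed.cache") and staleness window; not the actual HDFS code.
class CachedDfsUsed {
    static final long MAX_AGE_MS = 10 * 60 * 1000;  // accept a value up to 10 minutes old

    // Called periodically and on shutdown: record the value plus a timestamp.
    static void save(Path volume, long used) throws IOException {
        Files.write(volume.resolve("dfsUsed.cache"),
                (used + " " + System.currentTimeMillis()).getBytes());
    }

    // Called at startup: return the cached value, or -1 if missing or stale
    // (in which case the caller falls back to the slow du scan).
    static long load(Path volume) throws IOException {
        Path f = volume.resolve("dfsUsed.cache");
        if (!Files.exists(f)) return -1;
        String[] parts = new String(Files.readAllBytes(f)).trim().split(" ");
        long age = System.currentTimeMillis() - Long.parseLong(parts[1]);
        return age <= MAX_AGE_MS ? Long.parseLong(parts[0]) : -1;
    }
}
```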
[jira] [Commented] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.
[ https://issues.apache.org/jira/browse/HDFS-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093202#comment-15093202 ] Xiaoyu Yao commented on HDFS-9584: -- Thanks [~jojochuang]! I've corrected the commit message. > NPE in distcp when ssl configuration file does not exist in class path. > --- > > Key: HDFS-9584 > URL: https://issues.apache.org/jira/browse/HDFS-9584 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Labels: supportability > Fix For: 2.8.0 > > Attachments: HDFS-9584.001.patch, HDFS-9584.patch, HDFS-9584.patch > > > {noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml > hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat} > If the {{ssl-distcp.xml}} file does not exist on the classpath, distcp will throw a > NullPointerException. > {code} > java.lang.NullPointerException > at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266) > at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250) > at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175) > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:127) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:431) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9641) IOException in hdfs write process causes file leases not released
Yongtao Yang created HDFS-9641: -- Summary: IOException in hdfs write process causes file leases not released Key: HDFS-9641 URL: https://issues.apache.org/jira/browse/HDFS-9641 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.6.3, 2.6.2, 2.6.1, 2.6.0 Environment: hadoop 2.6.0, Reporter: Yongtao Yang When writing a file, an IOException may be raised in DFSOutputStream.DataStreamer.run(); then 'streamerClosed' may be set to true and closeInternal() is invoked, which sets DFSOutputStream.closed to true. That is to say, 'closed' is already true before DFSOutputStream.close() is invoked, so dfsClient.endFileLease(fileId) is never executed. References to the DFSOutputStream objects will still be held in DFSClient.filesBeingWritten until the client quits, and the related resources will not be released. HDFS-4504 is a related issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
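The failure mode can be modeled with a simplified stand-in for the client-side lease (hypothetical types, not the HDFS client code): releasing the lease unconditionally in a finally-style close guarantees the endFileLease-style cleanup runs even when the stream already failed and marked itself closed internally.

```java
import java.io.IOException;

// Hypothetical model of the bug: an internal stream failure must not
// prevent lease release. close() here releases the lease regardless of
// whether the streamer already broke, which is the behavior the report
// says is missing when 'closed' is flipped before close() is called.
class WriterLease {
    boolean held = true;          // models the entry in filesBeingWritten
    boolean streamBroken = false; // models streamerClosed

    void write() throws IOException {
        streamBroken = true;      // simulate DataStreamer.run() failing
        throw new IOException("pipeline failure");
    }

    void close() {
        // Release the lease unconditionally, even if the stream already
        // closed itself after an internal error.
        held = false;
    }
}
```

The point of the sketch is the call pattern: write in a try block, close in finally, so the lease is released on both the success and failure paths.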
[jira] [Created] (HDFS-9642) Create reader threads pool on demand according to erasure coding policy
Kai Zheng created HDFS-9642: --- Summary: Create reader threads pool on demand according to erasure coding policy Key: HDFS-9642 URL: https://issues.apache.org/jira/browse/HDFS-9642 Project: Hadoop HDFS Issue Type: Improvement Reporter: Kai Zheng Assignee: Kai Zheng While investigating an issue, it was noticed that in {{DFSClient}}, {{STRIPED_READ_THREAD_POOL}} is always created during initialization and, regardless of the erasure coding policy in use, defaults to the value *18*. This suggests: * Create the thread pool on demand, only in the striping case. * When creating the pool, use a value that respects the erasure coding policy in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
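Both suggestions amount to lazy, policy-aware initialization. A minimal double-checked-locking sketch in plain Java (a hypothetical stand-in for DFSClient, not its actual code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: create the striped-read pool only on the first striped read, and
// size it from the erasure coding policy's data-unit count (e.g. 6 for
// RS-6-3) instead of a fixed default of 18.
class StripedReadPools {
    private volatile ExecutorService pool;  // created on demand, then reused

    ExecutorService get(int dataUnits) {
        ExecutorService p = pool;
        if (p == null) {
            synchronized (this) {
                if (pool == null) {
                    // Size respects the EC policy actually in use.
                    pool = Executors.newFixedThreadPool(dataUnits);
                }
                p = pool;
            }
        }
        return p;
    }
}
```

A replicated-only workload never calls get(), so it never pays for the pool at all; a striped workload gets a pool sized to its policy on first use.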
[jira] [Commented] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093197#comment-15093197 ] Hadoop QA commented on HDFS-9624: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-9624 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781708/HDFS-9624.004.patch | | JIRA Issue | HDFS-9624 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14097/console | This message was automatically generated. > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch, HDFS-9624.004.patch > > > The datanode seems to start very slowly after I finish migrating > datanodes and restart them. Looking at the dn logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 
2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... 
> 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... > 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO >
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-9588: Release Note: (was: +1. Jenkins failures are unrelated to the patch. I committed this to the feature branch. Thanks [~anu].) > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch, > HDFS-9588-HDFS-1312.004.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093273#comment-15093273 ] Arpit Agarwal commented on HDFS-9588: - +1. Jenkins failures are unrelated to the patch. I committed this to the feature branch. Thanks [~anu]. > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Fix For: HDFS-1312 > > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch, > HDFS-9588-HDFS-1312.004.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8762) Erasure Coding: the log of each streamer should show its index
[ https://issues.apache.org/jira/browse/HDFS-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Bo updated HDFS-8762: Resolution: Duplicate Status: Resolved (was: Patch Available) Other jiras have added the necessary index. > Erasure Coding: the log of each streamer should show its index > -- > > Key: HDFS-8762 > URL: https://issues.apache.org/jira/browse/HDFS-8762 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Li Bo >Assignee: Li Bo > Attachments: HDFS-8762-HDFS-7285-001.patch, > HDFS-8762-HDFS-7285-002.patch > > > The log in {{DataStreamer}} doesn't show which streamer it's generated from. > In order to make log information more convenient for debugging, each log > should include the index of the streamer it's generated from. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Yiqun updated HDFS-9624: Attachment: HDFS-9624.004.patch > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch, HDFS-9624.004.patch > > > It seems starting datanode so slowly when I am finishing migration of > datanodes and restart them.I look the dn logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... 
> 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data2/current: 289680ms > 2016-01-06 16:10:00,333 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data5/current: 292153ms > 2016-01-06
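The ~280-second per-volume times above come from the initial block-pool scan recomputing disk usage on every restart. One mitigation, sketched below purely for illustration (the cache file name, on-disk format, and staleness threshold are assumptions, not the actual HDFS-9624 patch), is to persist the last computed usage per volume and trust it on restart while it is still fresh, falling back to a full rescan only when the cache is stale or missing:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: persist the last computed per-volume usage so a restart can skip
// the expensive "du"-style scan when the cached value is recent enough.
// The file name "dfsUsed.cache" and the text format are illustrative.
class CachedUsage {
    static final String CACHE_FILE = "dfsUsed.cache"; // hypothetical name

    static void writeCache(Path volume, long bytesUsed) throws IOException {
        // Format: "<bytesUsed> <wallClockMillis>"
        Files.write(volume.resolve(CACHE_FILE),
            (bytesUsed + " " + System.currentTimeMillis()).getBytes());
    }

    // Returns the cached byte count if it is younger than maxAgeMs,
    // else -1 to signal that a full rescan is required.
    static long readCached(Path volume, long maxAgeMs) throws IOException {
        Path f = volume.resolve(CACHE_FILE);
        if (!Files.exists(f)) return -1;
        String[] parts = new String(Files.readAllBytes(f)).trim().split("\\s+");
        long used = Long.parseLong(parts[0]);
        long writtenAt = Long.parseLong(parts[1]);
        return (System.currentTimeMillis() - writtenAt <= maxAgeMs) ? used : -1;
    }

    // Self-contained demo: write a value, then read it back while fresh.
    static long demoFresh() throws IOException {
        Path tmp = Files.createTempDirectory("vol");
        writeCache(tmp, 12345L);
        return readCached(tmp, 60_000L);
    }

    // Demo of the stale path: a negative max age always forces a rescan.
    static long demoStale() throws IOException {
        Path tmp = Files.createTempDirectory("vol");
        writeCache(tmp, 7L);
        return readCached(tmp, -1L);
    }
}
```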
[jira] [Updated] (HDFS-9638) Improve DistCp Help and documentation
[ https://issues.apache.org/jira/browse/HDFS-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9638: -- Affects Version/s: (was: 3.0.0) 2.7.1 > Improve DistCp Help and documentation > - > > Key: HDFS-9638 > URL: https://issues.apache.org/jira/browse/HDFS-9638 > Project: Hadoop HDFS > Issue Type: Improvement > Components: distcp >Affects Versions: 2.7.1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: supportability > Attachments: HDFS-9638.001.patch > > > For example, > -mapredSslConf <ssl_conf_file> Configuration for ssl config file, to use with > hftps:// > But this ssl config file should be in the classpath, which is not clearly > stated. > http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "When using the hsftp protocol with a source, the security-related > properties may be specified in a config-file and passed to DistCp. > <ssl_conf_file> needs to be in the classpath. " > It is also not clear from the context if this ssl_conf_file should be at the > client issuing the command. (I think the answer is yes) > Also, in: http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html > "The following is an example of the contents of the contents of a SSL > Configuration file:" > there's an extra "of the contents of the contents " -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.
[ https://issues.apache.org/jira/browse/HDFS-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093226#comment-15093226 ] Hudson commented on HDFS-9584: -- FAILURE: Integrated in Hadoop-trunk-Commit #9088 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9088/]) Correct commit message for HDFS-9584 (xyao: rev 103d3cfc4ee1ac21970fd6bbca54bb085ab771ba) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > NPE in distcp when ssl configuration file does not exist in class path. > --- > > Key: HDFS-9584 > URL: https://issues.apache.org/jira/browse/HDFS-9584 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Labels: supportability > Fix For: 2.8.0 > > Attachments: HDFS-9584.001.patch, HDFS-9584.patch, HDFS-9584.patch > > > {noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml > hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat} > if {{ssl-distcp.xml}} file not exist in class path, distcp will throw > NullPointerException. > {code} > java.lang.NullPointerException > at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266) > at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250) > at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175) > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) > at org.apache.hadoop.tools.DistCp.run(DistCp.java:127) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.tools.DistCp.main(DistCp.java:431) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
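The NPE in the stack trace above happens because a missing classpath resource resolves to a null URL that is then dereferenced. A minimal sketch of the kind of defensive check this calls for (the method name is illustrative, not DistCp's actual code) resolves the resource explicitly and fails with an actionable message instead:

```java
import java.net.URL;

// Sketch: resolve the SSL configuration as a classpath resource and fail
// with a clear message when it is absent, instead of letting the null URL
// propagate into a NullPointerException deeper in the job setup.
class SslConfCheck {
    static URL requireOnClasspath(String resource) {
        URL url = Thread.currentThread().getContextClassLoader()
                        .getResource(resource);
        if (url == null) {
            throw new IllegalArgumentException("SSL configuration file '"
                + resource + "' not found on the classpath");
        }
        return url;
    }
}
```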
[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093331#comment-15093331 ] Kai Zheng commented on HDFS-9617: - Why you reopened this issue? Please note JIRA is not a place to answer your questions. As suggested above, please move to user/dev mailing list. Please close it. > my java client use muti-thread to put a same file to a same hdfs uri, after > no lease error,then client OutOfMemoryError > --- > > Key: HDFS-9617 > URL: https://issues.apache.org/jira/browse/HDFS-9617 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zuotingbing > Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java > > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. > Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250] > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy14.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy15.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > my java client(JVM -Xmx=2G) : > jmap TOP15: > num #instances #bytes class name > -- >1: 48072 2053976792 [B >2: 458525987568 >3: 458525878944 >4: 33634193112 >5: 33632548168 >6: 27332299008 >7: 5332191696 [Ljava.nio.ByteBuffer; >8: 247332026600 [C >9: 312872002368 > org.apache.hadoop.hdfs.DFSOutputStream$Packet > 10: 31972 767328 java.util.LinkedList$Node > 11: 22845 548280 java.lang.String > 12: 20372 488928 
java.util.concurrent.atomic.AtomicLong > 13: 3700 452984 java.lang.Class > 14: 981 439576 > 15: 5583 376344 [S -- This message was sent by Atlassian JIRA (v6.3.4#6332)
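The failure mode reported above follows from HDFS's single-writer lease per path. The toy model below (not HDFS code; names and semantics are deliberately simplified) shows why several threads calling create() on the same URI race each other: the last create takes the lease, and every earlier writer's next addBlock() then fails just like the "No lease ... File does not exist" error quoted above, while its queued packets keep accumulating on the client:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of HDFS's single-writer-per-path lease rule. A second create()
// for the same path displaces the first holder; the displaced writer's
// subsequent addBlock() calls are rejected, mirroring the
// LeaseExpiredException in the report above.
class ToyLeaseManager {
    private final Map<String, String> leaseHolder = new ConcurrentHashMap<>();

    // create() with overwrite semantics: the new client takes the lease.
    void create(String path, String client) {
        leaseHolder.put(path, client);
    }

    // addBlock() succeeds only for the current lease holder.
    boolean addBlock(String path, String client) {
        return client.equals(leaseHolder.get(path));
    }
}
```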
[jira] [Work stopped] (HDFS-8171) Extend BlockSender to support multiple block data source
[ https://issues.apache.org/jira/browse/HDFS-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-8171 stopped by Li Bo. --- > Extend BlockSender to support multiple block data source > > > Key: HDFS-8171 > URL: https://issues.apache.org/jira/browse/HDFS-8171 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Li Bo >Assignee: Li Bo > > Currently BlockSender reads a block from the disk and sends it to a remote > datanode. In EC encode/decode work, new blocks are generated by calculation. > In order to store these blocks to remote datanodes, we can ask BlockSender to > read data from the output of encode/decode calculation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script
[ https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093180#comment-15093180 ] Ming Ma commented on HDFS-9005: --- Thanks [~eddyxu] for the review! Regarding the main question around JSON format, I like flat list of objects better than grouping it by exclude attribute as that is just one of several properties(although it is the most frequently changed property). Ease of editing shouldn't be an issue as it is likely this file will be written by high-level tool. Performance wise, reload time doesn't seem to have much difference among different formats. When you said JSON array, do you mean putting list of DN objects under a top-level element? BTW, the format used in the patch is similar to rumen and yarn scheduler simulator job trace formats; not that HDFS needs to be the same; just FYI. Otherwise, I am open to any formats. For {{HostsFileWriter#includeHosts/excludeHost}}, it does overwrite the whole files. Maybe it should be renamed to initIncludeHosts/initExcludeHost. So far the test code doesn't need addIncludeHosts/addExcludeHost. After we agree on the actual JSON file format, I will update the patch with the other suggestions you have. > Provide support for upgrade domain script > - > > Key: HDFS-9005 > URL: https://issues.apache.org/jira/browse/HDFS-9005 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-9005.patch > > > As part of the upgrade domain feature, we need to provide a mechanism to > specify upgrade domain for each datanode. One way to accomplish that is to > allow admins specify an upgrade domain script that takes DN ip or hostname as > input and return the upgrade domain. Then namenode will use it at run time to > set {{DatanodeInfo}}'s upgrade domain string. 
The configuration can be > something like: > {noformat} > <property> > <name>dfs.namenode.upgrade.domain.script.file.name</name> > <value>/etc/hadoop/conf/upgrade-domain.sh</value> > </property> > {noformat} > just like the topology script. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
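For the JSON question discussed above, the "flat list of objects" option might look like the following. Field names here are illustrative assumptions for the sake of the comparison, not the format from the attached patch:

```json
[
  { "hostName": "dn1.example.com", "upgradeDomain": "ud0" },
  { "hostName": "dn2.example.com", "upgradeDomain": "ud1", "exclude": true }
]
```

The alternative raised in the review would instead group entries under a top-level element keyed by the exclude attribute.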
[jira] [Reopened] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError
[ https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zuotingbing reopened HDFS-9617: --- > my java client use muti-thread to put a same file to a same hdfs uri, after > no lease error,then client OutOfMemoryError > --- > > Key: HDFS-9617 > URL: https://issues.apache.org/jira/browse/HDFS-9617 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zuotingbing > Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java > > > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): > No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease. > Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250] > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) > at org.apache.hadoop.ipc.Client.call(Client.java:1411) > at org.apache.hadoop.ipc.Client.call(Client.java:1364) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) > at com.sun.proxy.$Proxy14.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391) > at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) > at com.sun.proxy.$Proxy15.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536) > my java client(JVM -Xmx=2G) : > jmap TOP15: > num #instances #bytes class name > -- >1: 48072 2053976792 [B >2: 458525987568 >3: 458525878944 >4: 33634193112 >5: 33632548168 >6: 27332299008 >7: 5332191696 [Ljava.nio.ByteBuffer; >8: 247332026600 [C >9: 312872002368 > org.apache.hadoop.hdfs.DFSOutputStream$Packet > 10: 31972 767328 java.util.LinkedList$Node > 11: 22845 548280 java.lang.String > 12: 20372 488928 java.util.concurrent.atomic.AtomicLong > 13: 3700 452984 java.lang.Class > 14: 981 439576 > 15: 5583 376344 [S -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9621: Affects Version/s: (was: 3.0.0) > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 2.9.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093419#comment-15093419 ] Jing Zhao commented on HDFS-9621: - Thanks Nicholas for reviewing the branch-2 patch! I've committed it. > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 2.9.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093449#comment-15093449 ] Hudson commented on HDFS-9621: -- FAILURE: Integrated in Hadoop-trunk-Commit #9092 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9092/]) move HDFS-9621 from trunk to 2.9.0 in CHANGES.txt. (jing9: rev 17158647f8b7cda96bfbf82a78d543befac1f01c) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Fix For: 2.9.0 > > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.branch-2.patch, HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
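The intent of the fix can be summarized in a toy lookup: a file should be reported as erasure-coded only when a policy is recorded on the file's own inode, whereas the pre-fix listing code blindly inherited the nearest ancestor directory's policy, which is what mislabeled pre-existing replicated files. The types below are illustrative, not the NameNode's actual INode classes:

```java
// Toy sketch of the corrected effective-policy lookup for getListing.
// A null policy id means "plain replicated file".
class EcPolicyResolver {
    static Byte effectivePolicy(Byte filePolicyId, Byte ancestorDirPolicyId,
                                boolean isDirectory) {
        if (isDirectory) {
            // Directories inherit: files created under them later pick this up.
            return ancestorDirPolicyId;
        }
        // Files: trust only what is stored on the file's own inode.
        return filePolicyId;
    }
}
```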
[jira] [Updated] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC
[ https://issues.apache.org/jira/browse/HDFS-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-9588: --- Attachment: HDFS-9588-HDFS-1312.003.patch Hi [~arpitagarwal], Thanks for the review. I have updated patch based on your comments. bq.Missing check for presence of maxDiskBandwidth when deserializing the protobuf message. fixed. bq. Instead of enumerating the error codes in SubmitDiskBalancerPlanResponseProto can we just have the DataNode throw appropriate exceptions, when the server-side implementation is done? Added a new exception called diskbalancerException, will communicate all errors using that. bq. Missing Javadoc for ClientDatanodeProtocol#submitDiskBalancerPlan. Fixed, will document Plan ID properly in the next patch, since that is the one that really uses it. bq. pick: Should DataNode#submitDiskBalancerPlan throw NotImplementedException until it is implemented, instead of returning OK? fixed bq. Is the plan version used currently? yes, it is left for future changes to plan json. bq. Stylistic point - we can make most of the protobuf fields optional? Fixed, converted some more fields to optional, left some as required since without them the protocol would not make sense. > DiskBalancer : Add submitDiskbalancer RPC > - > > Key: HDFS-9588 > URL: https://issues.apache.org/jira/browse/HDFS-9588 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Affects Versions: HDFS-1312 >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9588-HDFS-1312.001.patch, > HDFS-9588-HDFS-1312.002.patch, HDFS-9588-HDFS-1312.003.patch > > > Add a data node RPC that allows client to submit a diskbalancer plan to data > node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
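Taken together, the review points above suggest a request message along these lines. This is a sketch only: the message name, field names, and numbering are assumptions, not the actual .proto from the attached patches, and per the review, errors would surface as a DiskBalancerException rather than enumerated codes in the response:

```protobuf
// Illustrative shape only -- not the patch's ClientDatanodeProtocol.proto.
message SubmitDiskBalancerPlanRequestProto {
  required string planID = 1;            // identifier for the submitted plan
  required uint64 planVersion = 2;       // kept for future plan JSON changes
  required string plan = 3;              // the plan JSON itself
  optional uint64 maxDiskBandwidth = 4;  // optional throttle; presence must
                                         // be checked when deserializing
}
```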
[jira] [Created] (HDFS-9637) Add test for HADOOP-12702
Daniel Templeton created HDFS-9637: -- Summary: Add test for HADOOP-12702 Key: HDFS-9637 URL: https://issues.apache.org/jira/browse/HDFS-9637 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 2.7.1 Reporter: Daniel Templeton Assignee: Daniel Templeton Per discussion on the dev list, the tests for the new FileSystemSink class should be added to the HDFS project to avoid creating a dependency for the common project on the HDFS project. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092609#comment-15092609 ] Hadoop QA commented on HDFS-9628: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s {color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s {color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s {color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 30s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 7s {color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 12s {color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 28s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0cf5e66 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781618/HDFS-9628.HDFS-8707.005.patch | | JIRA Issue | HDFS-9628 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc | | uname | Linux 04d80b18be3f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 1732e7f | | Default Java | 1.7.0_91 | | Multi-JDK versions |
[jira] [Commented] (HDFS-9624) DataNode start slowly due to the initial DU command operations
[ https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15091955#comment-15091955 ] Lin Yiqun commented on HDFS-9624: - Hi [~drankye], could you find time to review my latest patch again? > DataNode start slowly due to the initial DU command operations > -- > > Key: HDFS-9624 > URL: https://issues.apache.org/jira/browse/HDFS-9624 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Lin Yiqun >Assignee: Lin Yiqun > Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, > HDFS-9624.003.patch, HDFS-9624.004.patch > > > The DataNode starts very slowly after I finish migrating the datanodes and > restart them. Looking at the DN logs: > {code} > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4 > 2016-01-06 16:05:08,118 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added > volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK > 2016-01-06 16:05:08,176 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: > Registered FSDatasetState MBean > 2016-01-06 16:05:08,177 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 > 2016-01-06 16:05:08,178 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data2/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data3/current... 
> 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data4/current... > 2016-01-06 16:05:08,179 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data5/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data6/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data7/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data8/current... > 2016-01-06 16:05:08,180 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data9/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data10/current... > 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data11/current... 
> 2016-01-06 16:05:08,181 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning > block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume > /home/data/data/hadoop/dfs/data/data12/current... > 2016-01-06 16:09:49,646 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data7/current: 281466ms > 2016-01-06 16:09:54,235 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data9/current: 286054ms > 2016-01-06 16:09:57,859 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on > /home/data/data/hadoop/dfs/data/data2/current: 289680ms > 2016-01-06 16:10:00,333 INFO > org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time > taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on >
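The HDFS-9624 logs above show each volume spending several minutes in the initial scan (essentially a `du` over every block directory) before the DataNode can come up. One common mitigation, sketched below as a minimal model and not Hadoop's actual fix, is to persist the last computed per-volume usage so a restarting DataNode can reuse a fresh cached value instead of rescanning. The class name, cache-file format, and freshness threshold here are all hypothetical:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CachedDfsUsed {
    /** A cached value is usable only if it was written recently enough. */
    public static boolean isFresh(long savedAtMs, long nowMs, long maxAgeMs) {
        long age = nowMs - savedAtMs;
        return age >= 0 && age <= maxAgeMs;
    }

    /** Persist "usedBytes savedAtMs" next to the volume directory. */
    public static boolean save(Path cacheFile, long usedBytes, long savedAtMs) {
        try {
            Files.write(cacheFile,
                    (usedBytes + " " + savedAtMs).getBytes(StandardCharsets.UTF_8));
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    /** Return the cached usage, or -1 if missing/stale so the caller falls back to a real scan. */
    public static long load(Path cacheFile, long nowMs, long maxAgeMs) {
        try {
            String[] parts = new String(Files.readAllBytes(cacheFile),
                    StandardCharsets.UTF_8).trim().split(" ");
            long used = Long.parseLong(parts[0]);
            long savedAt = Long.parseLong(parts[1]);
            return isFresh(savedAt, nowMs, maxAgeMs) ? used : -1;
        } catch (IOException | RuntimeException e) {
            return -1; // no usable cache: fall back to scanning
        }
    }

    /** Demonstrates the save/load round trip with fabricated timestamps. */
    public static long demoRoundTrip() {
        try {
            Path f = Files.createTempFile("dfsused", ".cache");
            save(f, 123456L, 1000L);
            return load(f, 2000L, 600000L);
        } catch (IOException e) {
            return -2;
        }
    }
}
```

A real implementation would also refresh the cache periodically and on clean shutdown; the sketch only covers the startup round trip.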
[jira] [Updated] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.
[ https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-8999: -- Attachment: h8999_20160111.patch h8999_20160111.patch: - The new behavior is configurable. The default is disabled. - Allow closing a file with multiple COMMITTED blocks. - Use minReplication instead of 1. > Namenode need not wait for {{blockReceived}} for the last block before > completing a file. > - > > Key: HDFS-8999 > URL: https://issues.apache.org/jira/browse/HDFS-8999 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Jitendra Nath Pandey >Assignee: Tsz Wo Nicholas Sze > Attachments: h8999_20151228.patch, h8999_20160106.patch, > h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch > > > This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment > from the jira: > {quote} > ...whether we need to let NameNode wait for all the block_received msgs to > announce the replica is safe. Looking into the code, now we have ># NameNode knows the DataNodes involved when initially setting up the > writing pipeline ># If any DataNode fails during the writing, client bumps the GS and > finally reports all the DataNodes included in the new pipeline to NameNode > through the updatePipeline RPC. ># When the client received the ack for the last packet of the block (and > before the client tries to close the file on NameNode), the replica has been > finalized in all the DataNodes. > Then in this case, when NameNode receives the close request from the client, > the NameNode already knows the latest replicas for the block. Currently the > checkReplication call only counts in all the replicas that NN has already > received the block_received msg, but based on the above #2 and #3, it may be > safe to also count in all the replicas in the > BlockUnderConstructionFeature#replicas? > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
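The pipeline-counting idea quoted above can be modeled as a single predicate: when the new behavior is enabled, replicas recorded in the write pipeline may count toward the completion check alongside those for which blockReceived has arrived. This is a hedged sketch, not the actual NameNode code; the method, signature, and counting rule are illustrative, following the patch notes (configurable, default disabled, compared against minReplication):

```java
public class CompleteFileCheck {
    /**
     * Decide whether the NameNode may complete a file whose last block is COMMITTED.
     *
     * @param receivedReplicas replicas for which a blockReceived message arrived
     * @param pipelineReplicas replicas recorded for the write pipeline
     *                         (cf. BlockUnderConstructionFeature#replicas)
     * @param minReplication   the configured minimum replication
     * @param countPipeline    the proposed, default-off behavior
     */
    public static boolean canCompleteLastBlock(int receivedReplicas,
                                               int pipelineReplicas,
                                               int minReplication,
                                               boolean countPipeline) {
        // With the flag off, only confirmed replicas count (current behavior);
        // with it on, the known pipeline replicas may satisfy the check too.
        int usable = countPipeline
                ? Math.max(receivedReplicas, pipelineReplicas)
                : receivedReplicas;
        return usable >= minReplication;
    }
}
```

With the flag disabled the NameNode still waits for blockReceived, which is why the default stays conservative.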
[jira] [Commented] (HDFS-9636) libhdfs++: for consistency, include files should be in hdfspp
[ https://issues.apache.org/jira/browse/HDFS-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092259#comment-15092259 ] Hadoop QA commented on HDFS-9636: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 45s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s {color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 53s {color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} HDFS-8707 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 56s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 3m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 58s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 39s {color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 48s {color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 22s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0cf5e66 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781589/HDFS-9636.HDFS-8707.000.patch | | JIRA Issue | HDFS-9636 | | Optional Tests | asflicense compile cc mvnsite javac unit | | uname | Linux 368aa8b5e044 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-8707 / 1732e7f | | Default Java | 1.7.0_91 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/14084/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/14084/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_91.txt | | JDK v1.7.0_91 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/14084/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Max memory used | 75MB | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output |
[jira] [Updated] (HDFS-9636) libhdfs++: for consistency, include files should be in hdfspp
[ https://issues.apache.org/jira/browse/HDFS-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9636: - Status: Patch Available (was: Open) > libhdfs++: for consistency, include files should be in hdfspp > - > > Key: HDFS-9636 > URL: https://issues.apache.org/jira/browse/HDFS-9636 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen > Attachments: HDFS-9636.HDFS-8707.000.patch > > > The existing hdfs library resides in hdfs/hdfs.h. To maintain Least > Astonishment, we should move the libhdfspp files into hdfspp/hdfspp.h > (they're currently in the libhdfspp/ directory). > Likewise, the install step in the root directory should put the include files > in /include/hdfspp and include/hdfs (it currently erroneously puts the hdfs > file into libhdfs/) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
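The header move described above is ultimately a build-system change. As a rough sketch of what the corrected install rules might look like after renaming include/libhdfspp to include/hdfspp (paths and variables are assumptions, not the actual patch):

```cmake
# Install libhdfs++ headers under include/hdfspp, and the classic C header
# under include/hdfs (previously it was wrongly installed under libhdfs/).
install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/include/hdfspp
        DESTINATION include
        FILES_MATCHING PATTERN "*.h")
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/include/hdfs/hdfs.h
        DESTINATION include/hdfs)
```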
[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9628: - Attachment: HDFS-9628.HDFS-8707.004.patch New patch: added additional cmake flags for better output > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch, > HDFS-9628.HDFS-8707.004.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory
[ https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092228#comment-15092228 ] Tsz Wo Nicholas Sze commented on HDFS-9621: --- +1 patch looks good > getListing wrongly associates Erasure Coding policy to pre-existing > replicated files under an EC directory > > > Key: HDFS-9621 > URL: https://issues.apache.org/jira/browse/HDFS-9621 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0 >Reporter: Sushmitha Sreenivasan >Assignee: Jing Zhao >Priority: Critical > Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, > HDFS-9621.002.patch > > > This is reported by [~ssreenivasan]: > If we set Erasure Coding policy to a directory which contains some files with > replicated blocks, later when listing files under the directory these files > will be reported as EC files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
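The fix being +1'd above can be summarized as: a listing should report an EC policy for a file only when the file's own blocks are actually striped, never merely because an ancestor directory carries the policy. A minimal model of that decision (class and names are hypothetical, not the patched HDFS code):

```java
public class ListingPolicy {
    public static final String NO_POLICY = null;

    /**
     * Policy to report for a file in getListing.
     * The buggy behavior returned the directory's policy unconditionally;
     * the fix reports it only for files whose blocks are striped.
     *
     * @param fileIsStriped true if the file's own blocks are EC-striped
     * @param dirPolicy     the EC policy inherited from the directory
     */
    public static String policyForListing(boolean fileIsStriped, String dirPolicy) {
        return fileIsStriped ? dirPolicy : NO_POLICY;
    }
}
```

A pre-existing replicated file under a newly EC-tagged directory thus lists with no policy, matching the expected behavior in the bug report.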
[jira] [Commented] (HDFS-9631) Restarting namenode after deleting a directory with snapshot will fail
[ https://issues.apache.org/jira/browse/HDFS-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092245#comment-15092245 ] Wei-Chiu Chuang commented on HDFS-9631: --- Thanks [~kihwal] for the analysis. Yes, it does look like it got stuck in safe mode. I'll add more logs to figure out what went wrong. > Restarting namenode after deleting a directory with snapshot will fail > -- > > Key: HDFS-9631 > URL: https://issues.apache.org/jira/browse/HDFS-9631 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > > I found a number of {{TestOpenFilesWithSnapshot}} tests failed quite > frequently. > {noformat} > FAILED: > org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot > Error Message: > Timed out waiting for Mini HDFS Cluster to start > Stack Trace: > java.io.IOException: Timed out waiting for Mini HDFS Cluster to start > at > org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985) > at > org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:82) > {noformat} > These tests ({{testParentDirWithUCFileDeleteWithSnapshot}}, > {{testOpenFilesWithRename}}, {{testWithCheckpoint}}) are unable to reconnect > to the namenode after restart. It looks like the reconnection failed due to > an EOFException when BPServiceActor sends a heartbeat. > {noformat} > 2016-01-07 23:25:43,678 [main] WARN hdfs.MiniDFSCluster > (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster > to start... 
> 2016-01-07 23:25:44,679 [main] WARN hdfs.MiniDFSCluster > (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster > to start... > 2016-01-07 23:25:44,720 [DataNode: > [[[DISK]file:/home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, > [DISK]file: > /home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]] > heartbeating to localhost/127.0.0.1:60472] WARN datanode > .DataNode (BPServiceActor.java:offerService(752)) - IOException in > offerService > java.io.EOFException: End of File Exception between local host is: > "weichiu.vpc.cloudera.com/172.28.211.219"; destination host is: > "localhost":6047 > 2; :; For more details see: http://wiki.apache.org/hadoop/EOFException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:793) > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:766) > at org.apache.hadoop.ipc.Client.call(Client.java:1452) > at org.apache.hadoop.ipc.Client.call(Client.java:1385) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy18.sendHeartbeat(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:154) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:557) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:660) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:851) > at java.lang.Thread.run(Thread.java:745) > 
Caused by: java.io.EOFException > at java.io.DataInputStream.readInt(DataInputStream.java:392) > at > org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1110) > at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1005) > {noformat} > It appears that these three tests all call {{doWriteAndAbort()}}, which > creates files and then abort, and then set the parent directory with a > snapshot, and then delete the parent directory. > Interestingly, if the parent directory does not have a snapshot, the tests > will not fail. Additionally, if the parent directory is not deleted, the > tests will not fail. > The following test will fail intermittently: > {code:java} > public void testDeleteParentDirWithSnapShot()
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092212#comment-15092212 ] Hadoop QA commented on HDFS-9628: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} docker {color} | {color:red} 8m 39s {color} | {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781590/HDFS-9628.HDFS-8707.003.patch | | JIRA Issue | HDFS-9628 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14085/console | This message was automatically generated. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8999) Namenode need not wait for {{blockReceived}} for the last block before completing a file.
[ https://issues.apache.org/jira/browse/HDFS-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092262#comment-15092262 ] Tsz Wo Nicholas Sze commented on HDFS-8999: --- Will add some tests for the new behavior. > Namenode need not wait for {{blockReceived}} for the last block before > completing a file. > - > > Key: HDFS-8999 > URL: https://issues.apache.org/jira/browse/HDFS-8999 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Jitendra Nath Pandey >Assignee: Tsz Wo Nicholas Sze > Attachments: h8999_20151228.patch, h8999_20160106.patch, > h8999_20160106b.patch, h8999_20160106c.patch, h8999_20160111.patch > > > This comes out of a discussion in HDFS-8763. Pasting [~jingzhao]'s comment > from the jira: > {quote} > ...whether we need to let NameNode wait for all the block_received msgs to > announce the replica is safe. Looking into the code, now we have ># NameNode knows the DataNodes involved when initially setting up the > writing pipeline ># If any DataNode fails during the writing, client bumps the GS and > finally reports all the DataNodes included in the new pipeline to NameNode > through the updatePipeline RPC. ># When the client received the ack for the last packet of the block (and > before the client tries to close the file on NameNode), the replica has been > finalized in all the DataNodes. > Then in this case, when NameNode receives the close request from the client, > the NameNode already knows the latest replicas for the block. Currently the > checkReplication call only counts in all the replicas that NN has already > received the block_received msg, but based on the above #2 and #3, it may be > safe to also count in all the replicas in the > BlockUnderConstructionFeature#replicas? > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092226#comment-15092226 ] Hadoop QA commented on HDFS-9628: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 37s {color} | {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781593/HDFS-9628.HDFS-8707.004.patch | | JIRA Issue | HDFS-9628 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14086/console | This message was automatically generated. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch, > HDFS-9628.HDFS-8707.004.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9628: - Attachment: HDFS-9628.HDFS-8707.005.patch New patch: explicitly free protobuf data in builder test. Also, include CTest output on failure option. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch, > HDFS-9628.HDFS-8707.004.patch, HDFS-9628.HDFS-8707.005.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9628: - Attachment: HDFS-9628.HDFS-8707.003.patch Rebased on trunk to remove conflict in hdfs_ext.h > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092141#comment-15092141 ] Hadoop QA commented on HDFS-9628: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} docker {color} | {color:red} 8m 26s {color} | {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781581/HDFS-9628.HDFS-8707.003.patch | | JIRA Issue | HDFS-9628 | | Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14083/console | This message was automatically generated. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092191#comment-15092191 ] James Clampffer commented on HDFS-9628: --- Thanks for the updates and rebase Bob! It looks like the unit test is failing under valgrind because it doesn't have the {code} google::protobuf::ShutdownProtobufLibrary(); {code} call at the end of the tests. If fixing that gets a good CI run I'll +1. Everything else looks solid. > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9493) Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092146#comment-15092146 ] Tony Wu commented on HDFS-9493: --- Thanks [~liuml07] & [~eddyxu] for the review and comments! > Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk > --- > > Key: HDFS-9493 > URL: https://issues.apache.org/jira/browse/HDFS-9493 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Mingliang Liu >Assignee: Tony Wu > Fix For: 2.8.0 > > Attachments: HDFS-9493.001.patch, HDFS-9493.002.patch, > HDFS-9493.003.patch > > > Tested in both Gentoo Linux and Mac. > {quote} > --- > T E S T S > --- > Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave > Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.159 sec > <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestMetaSave > testMetasaveAfterDelete(org.apache.hadoop.hdfs.server.namenode.TestMetaSave) > Time elapsed: 15.318 sec <<< FAILURE! > java.lang.AssertionError: null > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.hadoop.hdfs.server.namenode.TestMetaSave.testMetasaveAfterDelete(TestMetaSave.java:154) > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings
[ https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9628: - Attachment: HDFS-9628.HDFS-8707.003.patch > libhdfs++: Implement builder apis from C bindings > - > > Key: HDFS-9628 > URL: https://issues.apache.org/jira/browse/HDFS-9628 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Bob Hansen >Assignee: Bob Hansen > Attachments: HDFS-9628.HDFS-8707.000.patch, > HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch, > HDFS-9628.HDFS-8707.003.patch, HDFS-9628.HDFS-8707.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9636) libhdfs++: for consistency, include files should be in hdfspp
Bob Hansen created HDFS-9636: Summary: libhdfs++: for consistency, include files should be in hdfspp Key: HDFS-9636 URL: https://issues.apache.org/jira/browse/HDFS-9636 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bob Hansen The existing hdfs library resides in hdfs/hdfs.h. To maintain Least Astonishment, we should move the libhdfspp files into hdfspp/hdfspp.h (they're currently in the libhdfspp/ directory). Likewise, the install step in the root directory should put the include files in /include/hdfspp and include/hdfs (it currently erroneously puts the hdfs file into libhdfs/) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9636) libhdfs++: for consistency, include files should be in hdfspp
[ https://issues.apache.org/jira/browse/HDFS-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bob Hansen updated HDFS-9636:
-----------------------------
    Attachment: HDFS-9636.HDFS-8707.000.patch

Renamed include/libhdfspp to include/hdfspp. Fixed up install paths

> libhdfs++: for consistency, include files should be in hdfspp
> -------------------------------------------------------------
>
>                 Key: HDFS-9636
>                 URL: https://issues.apache.org/jira/browse/HDFS-9636
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: Bob Hansen
>         Attachments: HDFS-9636.HDFS-8707.000.patch
>
>
> The existing hdfs library resides in hdfs/hdfs.h. To maintain Least Astonishment, we should move the libhdfspp files into hdfspp/hdfspp.h (they're currently in the libhdfspp/ directory).
> Likewise, the install step in the root directory should put the include files in /include/hdfspp and include/hdfs (it currently erroneously puts the hdfs file into libhdfs/)
[jira] [Commented] (HDFS-9569) Log the name of the fsimage being loaded for better supportability
[ https://issues.apache.org/jira/browse/HDFS-9569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15092374#comment-15092374 ]

Chris Nauroth commented on HDFS-9569:
-------------------------------------

Hi [~yzhangal]. I had +1'd this. Were you planning to commit it, or would you like me to do it?

> Log the name of the fsimage being loaded for better supportability
> ------------------------------------------------------------------
>
>                 Key: HDFS-9569
>                 URL: https://issues.apache.org/jira/browse/HDFS-9569
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>            Priority: Trivial
>              Labels: supportability
>             Fix For: 2.7.3
>
>         Attachments: HDFS-9569.001.patch, HDFS-9569.002.patch, HDFS-9569.003.patch, HDFS-9569.004.patch, HDFS-9569.005.patch
>
>
> When NN starts to load fsimage, it does
> {code}
>   void loadFSImageFile(FSNamesystem target, MetaRecoveryContext recovery,
>       FSImageFile imageFile, StartupOption startupOption) throws IOException {
>     LOG.debug("Planning to load image :\n" + imageFile);
> ..
>     long txId = loader.getLoadedImageTxId();
>     LOG.info("Loaded image for txid " + txId + " from " + curFile);
> {code}
> A debug msg is issued at the beginning with the fsimage file name, then at the end an info msg is issued after loading.
> If the fsimage loading failed due to corrupted fsimage (see HDFS-9406), we don't see the first msg. It'd be helpful to always be able to see from NN logs what fsimage file it's loading.
> Two improvements:
> 1. Change the above debug to info
> 2. If exception happens when loading fsimage, be sure to report the fsimage name being loaded in the error message.
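The two improvements described in the quoted HDFS-9569 report can be sketched generically as follows. This is a minimal, self-contained illustration of the logging pattern, not the actual FSImage code; the class and method names here are hypothetical.

```java
import java.io.File;
import java.io.IOException;

// Sketch of the HDFS-9569 proposal: (1) name the fsimage file at INFO level
// *before* loading starts, and (2) include the file name in the error raised
// when loading fails, so the log always records which image was involved.
public class FsImageLoadSketch {

    static String loadImage(File imageFile) throws IOException {
        // Improvement 1: INFO (not DEBUG), so the message is always visible.
        System.out.println("INFO: Loading image file " + imageFile);
        try {
            return doLoad(imageFile);
        } catch (IOException e) {
            // Improvement 2: surface the fsimage name in the failure message.
            throw new IOException("Failed to load image from " + imageFile, e);
        }
    }

    // Stand-in for the real image parsing; fails for a missing file.
    private static String doLoad(File f) throws IOException {
        if (!f.exists()) {
            throw new IOException("corrupt or missing image");
        }
        return "loaded " + f;
    }

    public static void main(String[] args) {
        try {
            loadImage(new File("/nonexistent/fsimage_0000000000000000042"));
        } catch (IOException e) {
            System.out.println("ERROR: " + e.getMessage());
        }
    }
}
```

With this shape, even when `doLoad` throws, the log contains both the INFO line naming the file and an error message that repeats it.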
[jira] [Updated] (HDFS-9630) DistCp minor refactoring and clean up
[ https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhe Zhang updated HDFS-9630:
----------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
          Status: Resolved  (was: Patch Available)

I just committed the patch to trunk, branch-2, and branch-2.8. Thanks Kai for the work!

> DistCp minor refactoring and clean up
> -------------------------------------
>
>                 Key: HDFS-9630
>                 URL: https://issues.apache.org/jira/browse/HDFS-9630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: distcp
>    Affects Versions: 2.7.1
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch
>
>
> While working on HDFS-9613, it was found there are various checking style issues and minor things to clean up in {{DistCp}}. Better to handle them separately so the fix can be in earlier.
[jira] [Updated] (HDFS-9630) DistCp minor refactoring and clean up
[ https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhe Zhang updated HDFS-9630:
----------------------------
    Affects Version/s: 2.7.1
     Target Version/s: 2.8.0

> DistCp minor refactoring and clean up
> -------------------------------------
>
>                 Key: HDFS-9630
>                 URL: https://issues.apache.org/jira/browse/HDFS-9630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: distcp
>    Affects Versions: 2.7.1
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>            Priority: Minor
>         Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch
>
>
> While working on HDFS-9613, it was found there are various checking style issues and minor things to clean up in {{DistCp}}. Better to handle them separately so the fix can be in earlier.
[jira] [Updated] (HDFS-9630) DistCp minor refactoring and clean up
[ https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhe Zhang updated HDFS-9630:
----------------------------
    Target Version/s: 3.0.0  (was: 2.8.0)

> DistCp minor refactoring and clean up
> -------------------------------------
>
>                 Key: HDFS-9630
>                 URL: https://issues.apache.org/jira/browse/HDFS-9630
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: distcp
>    Affects Versions: 2.7.1
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>            Priority: Minor
>         Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch
>
>
> While working on HDFS-9613, it was found there are various checking style issues and minor things to clean up in {{DistCp}}. Better to handle them separately so the fix can be in earlier.