[jira] [Commented] (HDFS-13347) RBF: Cache datanode reports
[ https://issues.apache.org/jira/browse/HDFS-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423511#comment-16423511 ] Xiao Chen commented on HDFS-13347: -- Thanks all for the work here. This seems to have broken branch-2 compilation though: {noformat} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-hdfs-rbf: Compilation failure: Compilation failure: [ERROR] hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java:[34,26] package java.util.function does not exist [ERROR] hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java:[35,24] package java.util.stream does not exist {noformat} > RBF: Cache datanode reports > --- > > Key: HDFS-13347 > URL: https://issues.apache.org/jira/browse/HDFS-13347 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2 > > Attachments: HDFS-13347-branch-2.000.patch, HDFS-13347.000.patch, > HDFS-13347.001.patch, HDFS-13347.002.patch, HDFS-13347.003.patch, > HDFS-13347.004.patch, HDFS-13347.005.patch, HDFS-13347.006.patch > > > Getting the datanode reports is an expensive operation and can be executed > very frequently by the UI and watchdogs. We should cache this information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
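For context on the branch-2 break above: branch-2 builds against Java 7, where the Java-8-only packages java.util.function and java.util.stream do not exist. A minimal sketch of the usual backport pattern, replacing a Supplier lambda with a hand-rolled interface and an anonymous class; the interface name and cached value here are illustrative, not the actual NamenodeBeanMetrics fix:

```java
public class Java7Backport {

    // Minimal stand-in for java.util.function.Supplier, which is not
    // available on Java 7.
    interface ReportSupplier {
        String get();
    }

    // Java 7 compatible anonymous class instead of a Java 8 lambda such as
    //   Supplier<String> s = () -> "cached datanode report";
    static ReportSupplier cachedReportSupplier() {
        return new ReportSupplier() {
            @Override
            public String get() {
                return "cached datanode report";
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(cachedReportSupplier().get());
    }
}
```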
[jira] [Commented] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
[ https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423498#comment-16423498 ] Shashikant Banerjee commented on HDFS-13383: Thanks [~msingh] for working on this. I just applied the patch and ran the oz script to start KSM, SCM, etc. The issue still seems to exist. HW15685:hadoop sbanerjee$ ./hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/bin/oz genesis -h Error: Could not find or load main class org.apache.hadoop.ozone.genesis.Genesis HW15685:hadoop sbanerjee$ ./hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/bin/oz scm WARNING: /Users/sbanerjee/hadoop/hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/logs does not exist. Creating. Error: Could not find or load main class org.apache.hadoop.ozone.scm.StorageContainerManager > Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths > - > > Key: HDFS-13383 > URL: https://issues.apache.org/jira/browse/HDFS-13383 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13383-HDFS-7240.001.patch > > > start-ozone.sh calls start-dfs.sh to start the NN and DN in a ozone cluster. > Starting of datanode fails because of incomplete classpaths as datanode is > unable to load all the plugins. 
> Setting the class path to the following values does resolve the issue: > {code} > export > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/* > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13320) Ozone:Support for MicrobenchMarking Tool
[ https://issues.apache.org/jira/browse/HDFS-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423470#comment-16423470 ] Shashikant Banerjee commented on HDFS-13320: Thanks [~anu] for having a look at it. The issue does not seem to be related to the patch. The HADOOP_CLASSPATH is not getting set properly, so I am not able to run KSM, SCM, etc. HW15685:hadoop sbanerjee$ ./hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/bin/oz ksm WARNING: /Users/sbanerjee/hadoop/hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/logs does not exist. Creating. Error: Could not find or load main class org.apache.hadoop.ozone.ksm.KeySpaceManager HW15685:hadoop sbanerjee$ ./hadoop-dist/target/hadoop-3.2.0-SNAPSHOT/bin/oz scm Error: Could not find or load main class org.apache.hadoop.ozone.scm.StorageContainerManager This is related to HDFS-13383. > Ozone:Support for MicrobenchMarking Tool > > > Key: HDFS-13320 > URL: https://issues.apache.org/jira/browse/HDFS-13320 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13320-HDFS-7240.001.patch, > HDFS-13320-HDFS-7240.002.patch, HDFS-13320-HDFS-7240.003.patch > > > This Jira proposes to add a micro benchmarking tool called Genesis which > executes a set of HDSL/Ozone benchmarks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13320) Ozone:Support for MicrobenchMarking Tool
[ https://issues.apache.org/jira/browse/HDFS-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423454#comment-16423454 ] Anu Engineer commented on HDFS-13320: - [~shashikant] I am getting the following error when I try to execute this code. Can you please check? {noformat} ./hadoop-3.2.0-SNAPSHOT/bin/oz genesis -h Error: Could not find or load main class org.apache.hadoop.ozone.genesis.Genesis {noformat} > Ozone:Support for MicrobenchMarking Tool > > > Key: HDFS-13320 > URL: https://issues.apache.org/jira/browse/HDFS-13320 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13320-HDFS-7240.001.patch, > HDFS-13320-HDFS-7240.002.patch, HDFS-13320-HDFS-7240.003.patch > > > This Jira proposes to add a micro benchmarking tool called Genesis which > executes a set of HDSL/Ozone benchmarks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts
[ https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423449#comment-16423449 ] Xiao Chen commented on HDFS-13056: -- Thanks [~dennishuo] for the new rev! LGTM, pending the following minors: - {{DFSClient}} is an hdfs class, so we don't need to mark it {{InterfaceAudience.LimitedPrivate}} for hdfs - Looking at the duplicate code in FileChecksumHelper, I think we can extract it into meaningful methods: one method to set the CRC type based on blockIdx, and another method to write the blockchecksumbuf based on checksumData and the checksum type, returning a detailed debugging string. - We usually leave a blank line in javadoc between the description and other fields (params, throws, etc.). See existing code for examples. > Expose file-level composite CRCs in HDFS which are comparable across > different instances/layouts > > > Key: HDFS-13056 > URL: https://issues.apache.org/jira/browse/HDFS-13056 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, distcp, erasure-coding, federation, hdfs >Affects Versions: 3.0.0 >Reporter: Dennis Huo >Assignee: Dennis Huo >Priority: Major > Attachments: HDFS-13056-branch-2.8.001.patch, > HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, > HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, > HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, > HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, > HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, > HDFS-13056.008.patch, HDFS-13056.009.patch, HDFS-13056.010.patch, > HDFS-13056.011.patch, HDFS-13056.012.patch, HDFS-13056.013.patch, > Reference_only_zhen_PPOC_hadoop2.6.X.diff, hdfs-file-composite-crc32-v1.pdf, > hdfs-file-composite-crc32-v2.pdf, hdfs-file-composite-crc32-v3.pdf > > > FileChecksum was first introduced in > [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then > has remained defined as 
MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are > already stored as part of datanode metadata, and the MD5 approach is used to > compute an aggregate value in a distributed manner, with individual datanodes > computing the MD5-of-CRCs per-block in parallel, and the HDFS client > computing the second-level MD5. > > A shortcoming of this approach which is often brought up is the fact that > this FileChecksum is sensitive to the internal block-size and chunk-size > configuration, and thus different HDFS files with different block/chunk > settings cannot be compared. More commonly, one might have different HDFS > clusters which use different block sizes, in which case any data migration > won't be able to use the FileChecksum for distcp's rsync functionality or for > verifying end-to-end data integrity (on top of low-level data integrity > checks applied at data transfer time). > > This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 > during the addition of checksum support for striped erasure-coded files; > while there was some discussion of using CRC composability, it still > ultimately settled on the hierarchical MD5 approach, which also adds the problem > that checksums of basic replicated files are not comparable to striped files. > > This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses > CRC composition to remain completely chunk/block agnostic, and allows > comparison between striped vs replicated files, between different HDFS > instances, and possibly even between HDFS and other external storage systems. > This feature can also be added in-place to be compatible with existing block > metadata, and doesn't need to change the normal path of chunk verification, > so is minimally invasive. This also means even large preexisting HDFS > deployments could adopt this feature to retroactively sync data. 
A detailed > design document can be found here: > https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
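The chunk/block-agnostic property of COMPOSITE-CRC rests on the fact that the CRC of a concatenation can be computed from the CRCs of the parts alone. Below is a self-contained sketch of that composition for plain CRC-32, using the same identity zlib's crc32_combine implements: advance crc(A) through len(B) zero bytes in the raw (unconditioned) register, then XOR with crc(B). HDFS's actual implementation is more general and optimized, so treat this only as an illustration of the underlying math:

```java
import java.util.zip.CRC32;

public class CrcCombineDemo {
    // Advance the raw (unconditioned) CRC-32 register by one byte,
    // reflected polynomial 0xEDB88320.
    private static long crcByte(long reg, int b) {
        reg = (reg ^ (b & 0xFF)) & 0xFFFFFFFFL;
        for (int i = 0; i < 8; i++) {
            reg = ((reg & 1) != 0) ? (reg >>> 1) ^ 0xEDB88320L : reg >>> 1;
        }
        return reg;
    }

    // Combine crc1 = crc(A) with crc2 = crc(B) into crc(A || B):
    // feed len2 zero bytes through the register starting from crc1,
    // then XOR with crc2. (zlib's crc32_combine computes the same
    // result in O(log len2) with precomputed matrices.)
    static long combine(long crc1, long crc2, long len2) {
        long reg = crc1;
        for (long i = 0; i < len2; i++) {
            reg = crcByte(reg, 0);
        }
        return (reg ^ crc2) & 0xFFFFFFFFL;
    }

    static long crcOf(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] a = "chunk-one:".getBytes();
        byte[] b = "chunk-two".getBytes();
        byte[] ab = "chunk-one:chunk-two".getBytes();
        long combined = combine(crcOf(a), crcOf(b), b.length);
        // The composed CRC equals the CRC of the concatenation,
        // independent of where the chunk boundary falls.
        System.out.println(combined == crcOf(ab));
    }
}
```

Because the composition depends only on the part CRCs and the length of the second part, the result is the same for any chunk or block layout, which is exactly what makes the file-level checksum comparable across instances.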
[jira] [Updated] (HDFS-13386) RBF: wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dibyendu Karmakar updated HDFS-13386: - Attachment: image-2018-04-03-11-59-51-623.png > RBF: wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Attachments: image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-52-57-764.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map getMountPointDates(String path) { > Map ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
[ https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-13341: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~elek] Thanks for the contribution. I have committed to the feature branch. [~xyao] Thanks for the review. > Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework > -- > > Key: HDFS-13341 > URL: https://issues.apache.org/jira/browse/HDFS-13341 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13341-HDFS-7240.001.patch, > HDFS-13341-HDFS-7240.002.patch, HDFS-13341-HDFS-7240.003.patch > > > ServiceRuntimeInfo is a generic interface to provide common information via > JMX beans (such as build version, compile info, started time). > Currently it is used only by KSM/SCM, I suggest to move it to the > hadoop-hdsl/framework project from hadoop-commons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13386) RBF: wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dibyendu Karmakar updated HDFS-13386: - Description: # hdfs dfs -ls !image-2018-04-03-11-59-51-623.png! this is happening because getMountPointDates is not implemented {code:java} private Map getMountPointDates(String path) { Map ret = new TreeMap<>(); // TODO add when we have a Mount Table return ret; } {code} was: # hdfs dfs -ls !image-2018-04-03-11-52-57-764.png! this is happening because getMountPointDates is not implemented {code:java} private Map getMountPointDates(String path) { Map ret = new TreeMap<>(); // TODO add when we have a Mount Table return ret; } {code} > RBF: wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Attachments: image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map getMountPointDates(String path) { > Map ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13386) RBF: wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dibyendu Karmakar updated HDFS-13386: - Attachment: (was: image-2018-04-03-11-52-57-764.png) > RBF: wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > > # hdfs dfs -ls > !image-2018-04-03-11-52-57-764.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map getMountPointDates(String path) { > Map ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13386) RBF: wrong date information in list file(-ls) result
Dibyendu Karmakar created HDFS-13386: Summary: RBF: wrong date information in list file(-ls) result Key: HDFS-13386 URL: https://issues.apache.org/jira/browse/HDFS-13386 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Dibyendu Karmakar Assignee: Dibyendu Karmakar # hdfs dfs -ls !image-2018-04-03-11-52-57-764.png! this is happening because getMountPointDates is not implemented {code:java} private Map getMountPointDates(String path) { Map ret = new TreeMap<>(); // TODO add when we have a Mount Table return ret; } {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
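The TODO above could be filled roughly as follows. This is a hypothetical illustration only: the mount table is mocked here as a map from mount path to modification time, whereas the real fix would query the Router's Mount Table state store, and the helper names are invented for the sketch.

```java
import java.util.Map;
import java.util.TreeMap;

public class MountPointDatesSketch {
    // Hypothetical stand-in for the Router's mount table:
    // mount path -> modification time.
    static Map<String, Long> mountTableEntries() {
        Map<String, Long> entries = new TreeMap<>();
        entries.put("/data/a", 100L);
        entries.put("/data/b", 200L);
        entries.put("/user", 300L);
        return entries;
    }

    // Sketch of the unimplemented method: return a date for each child
    // mount point under `path`, so -ls can show real modification times.
    static Map<String, Long> getMountPointDates(String path) {
        Map<String, Long> ret = new TreeMap<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (Map.Entry<String, Long> e : mountTableEntries().entrySet()) {
            String mount = e.getKey();
            if (mount.startsWith(prefix)) {
                // First path component below `path` is the listed child name.
                String child = mount.substring(prefix.length()).split("/")[0];
                Long prev = ret.get(child);
                // Keep the newest date when several entries share a child.
                if (prev == null || e.getValue() > prev) {
                    ret.put(child, e.getValue());
                }
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        System.out.println(getMountPointDates("/")); // → {data=200, user=300}
    }
}
```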
[jira] [Commented] (HDFS-13341) Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework
[ https://issues.apache.org/jira/browse/HDFS-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423437#comment-16423437 ] Anu Engineer commented on HDFS-13341: - +1, I will commit this shortly. > Ozone: Move ozone specific ServiceRuntimeInfo utility to the framework > -- > > Key: HDFS-13341 > URL: https://issues.apache.org/jira/browse/HDFS-13341 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Attachments: HDFS-13341-HDFS-7240.001.patch, > HDFS-13341-HDFS-7240.002.patch, HDFS-13341-HDFS-7240.003.patch > > > ServiceRuntimeInfo is a generic interface to provide common information via > JMX beans (such as build version, compile info, started time). > Currently it is used only by KSM/SCM, I suggest to move it to the > hadoop-hdsl/framework project from hadoop-commons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13385) Unknown compression method
jifei_yang created HDFS-13385: - Summary: Unknown compression method Key: HDFS-13385 URL: https://issues.apache.org/jira/browse/HDFS-13385 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs-client Affects Versions: 2.6.0 Environment: centos6.8+hadoop-2.6.0+spark-1.6.0 Reporter: jifei_yang Fix For: 2.6.0 {code:java} // java.io.IOException: unknown compression method at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method) at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:228) at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91) at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85) at java.io.InputStream.read(InputStream.java:101) at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180) at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216) at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:248) at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:48) at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:246) at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208) at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:148) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:89) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} When Spark reads the .gz files in the directory (/user/admin/data/), this exception occurs. When I read the same files using GZIPInputStream from java.io, they decompress normally, so I do not know how Hadoop decides whether a .gz file is valid. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
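On the question of how a .gz file is judged valid: a gzip member must start with the magic bytes 0x1f 0x8b followed by compression method 8 (deflate); zlib reports "unknown compression method" when that third byte is not 8, which usually indicates a corrupted or mislabeled file. A minimal header check (illustrative, not Hadoop's actual codec code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class GzipHeaderCheck {
    // A gzip member starts with ID1=0x1f, ID2=0x8b, CM=8 (deflate),
    // per RFC 1952. zlib's "unknown compression method" error fires
    // when CM is not 8.
    static boolean looksLikeGzip(InputStream in) throws IOException {
        int id1 = in.read();
        int id2 = in.read();
        int cm = in.read();
        return id1 == 0x1f && id2 == 0x8b && cm == 8;
    }

    public static void main(String[] args) throws IOException {
        byte[] good = {0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0};
        byte[] bad = {'n', 'o', 't', 'g', 'z'};
        System.out.println(looksLikeGzip(new ByteArrayInputStream(good))); // true
        System.out.println(looksLikeGzip(new ByteArrayInputStream(bad)));  // false
    }
}
```

Running this check on the first three bytes of the failing files would quickly show whether they are genuine gzip streams or merely files with a .gz extension.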
[jira] [Commented] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423410#comment-16423410 ] Íñigo Goiri commented on HDFS-13384: The proposal in [^HDFS-13384.000.patch] is to add a specific IOException for this case and catch it properly. > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13384.000.patch > > > When issuing RPC requests to subclusters, we have a timeout mechanism > introduced in HDFS-12273. We need to improve how this is handled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
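A minimal sketch of the pattern described in the comment above, assuming a dedicated IOException subtype so callers can distinguish a subcluster timeout from other RPC failures. The class name and handling below are illustrative; the actual patch defines its own class:

```java
import java.io.IOException;

public class TimeoutExceptionSketch {
    // Hypothetical dedicated exception type for subcluster RPC timeouts.
    static class SubclusterTimeoutException extends IOException {
        SubclusterTimeoutException(String msg) {
            super(msg);
        }
    }

    // Simulated RPC invocation: a timeout surfaces as the specific
    // exception type rather than a bare IOException, so the caller can
    // handle it separately (e.g. mark the subcluster unavailable).
    static String invoke(boolean timesOut) {
        try {
            if (timesOut) {
                throw new SubclusterTimeoutException("RPC to subcluster timed out");
            }
            return "ok";
        } catch (SubclusterTimeoutException e) {
            // Timeout-specific handling path.
            return "timeout";
        }
    }

    public static void main(String[] args) {
        System.out.println(invoke(false) + "," + invoke(true));
    }
}
```

The benefit of a dedicated subtype is that generic `catch (IOException e)` blocks elsewhere keep working, while timeout-aware callers can add a narrower catch before them.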
[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13384: --- Attachment: (was: HDFS-13384.000.patch) > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13384.000.patch > > > When issuing RPC requests to subclusters, we have a timeout mechanism > introduced in HDFS-12273. We need to improve how this is handled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13384: --- Attachment: HDFS-13384.000.patch > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13384.000.patch > > > When issuing RPC requests to subclusters, we have a timeout mechanism > introduced in HDFS-12273. We need to improve how this is handled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13384: --- Description: When issuing RPC requests to subclusters, we have a timeout mechanism introduced in HDFS-12273. We need to improve how this is handled. > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13384.000.patch > > > When issuing RPC requests to subclusters, we have a timeout mechanism > introduced in HDFS-12273. We need to improve how this is handled. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
[ https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423409#comment-16423409 ] genericqa commented on HDFS-13383: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 14s{color} | {color:green} There were no new shelldocs issues. 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s{color} | {color:red} The patch generated 4 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:7a542fb | | JIRA Issue | HDFS-13383 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917290/HDFS-13383-HDFS-7240.001.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 4c7d0d28a000 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / d0488c7 | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23755/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/23755/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 341 (vs. ulimit of 1) | | modules | C: hadoop-ozone/common U: hadoop-ozone/common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23755/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths > - > > Key: HDFS-13383 > URL: https://issues.apache.org/jira/browse/HDFS-13383 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13383-HDFS-7240.001.patch > > > start-ozone.sh calls start-dfs.sh to start the NN and DN in a ozone cluster. > Starting of datanode fails because of incomplete classpaths as datanode is > unable to load all the plugins. > Setting the class path to the following values does resolve the issue: > {code} > export >
[jira] [Updated] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13384: --- Attachment: HDFS-13384.000.patch > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13384.000.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13384) RBF: Improve timeout RPC call mechanism
[ https://issues.apache.org/jira/browse/HDFS-13384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri reassigned HDFS-13384: -- Assignee: Íñigo Goiri > RBF: Improve timeout RPC call mechanism > --- > > Key: HDFS-13384 > URL: https://issues.apache.org/jira/browse/HDFS-13384 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13384) RBF: Improve timeout RPC call mechanism
Íñigo Goiri created HDFS-13384: -- Summary: RBF: Improve timeout RPC call mechanism Key: HDFS-13384 URL: https://issues.apache.org/jira/browse/HDFS-13384 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Íñigo Goiri -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
[ https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13383: - Attachment: HDFS-13383-HDFS-7240.001.patch > Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths > - > > Key: HDFS-13383 > URL: https://issues.apache.org/jira/browse/HDFS-13383 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13383-HDFS-7240.001.patch > > > start-ozone.sh calls start-dfs.sh to start the NN and DN in a ozone cluster. > Starting of datanode fails because of incomplete classpaths as datanode is > unable to load all the plugins. > Setting the class path to the following values does resolve the issue: > {code} > export > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/* > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
[ https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13383: - Status: Patch Available (was: Open) > Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths > - > > Key: HDFS-13383 > URL: https://issues.apache.org/jira/browse/HDFS-13383 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13383-HDFS-7240.001.patch > > > start-ozone.sh calls start-dfs.sh to start the NN and DN in an Ozone cluster. > Starting the datanode fails because of incomplete classpaths, as the datanode is > unable to load all the plugins. > Setting the classpath to the following values does resolve the issue: > {code} > export > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/* > {code}
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423362#comment-16423362 ] genericqa commented on HDFS-13350: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 56s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 57s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}172m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestFileCorruption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13350 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917271/HDFS-13350.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ed6a29af21db 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c78cb18 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/23754/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23754/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423355#comment-16423355 ] genericqa commented on HDFS-13350: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13350 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917267/HDFS-13350.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux be5639bebcba 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c78cb18 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23753/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23753/testReport/ | | Max. process+thread count | 2923 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-13351) Revert HDFS-11156 from branch-2/branch-2.8
[ https://issues.apache.org/jira/browse/HDFS-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423286#comment-16423286 ] Weiwei Yang commented on HDFS-13351: This task is blocked since the branch-2 Jenkins build is broken. > Revert HDFS-11156 from branch-2/branch-2.8 > -- > > Key: HDFS-13351 > URL: https://issues.apache.org/jira/browse/HDFS-13351 > Project: Hadoop HDFS > Issue Type: Task > Components: webhdfs >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Major > Attachments: HDFS-13351-branch-2.001.patch, > HDFS-13351-branch-2.002.patch > > > Per discussion in HDFS-11156, let's revert the change from branch-2 and > branch-2.8. The new patch can be tracked in HDFS-12459.
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423234#comment-16423234 ] Lei (Eddy) Xu commented on HDFS-13350: -- Thanks a lot for the reviews, [~xiaochen] and [~ajayydv]. bq. FSEditLogLoader has 1 BlockIdManager.isStripedBlockID call too. This one is safe because this edit op {{OP_ALLOCATE_BLOCK_ID}} was introduced with the sequential id generator, as seen from the comment: {code} // ALLOCATE_BLOCK_ID is added for sequential block id, thus if the id // is negative, it must belong to striped blocks {code} bq. There are a few BlockIdManager.isStripedBlockID calls in BlockManager. These were the fixes from HDFS-7994, so they should work as described. Shall we use another JIRA to just fix them for code style consistency? bq. Shall we make new function BlockIdManager#isStripedBlock public as this may be utilized outside default package Done. bq. We can prevent innocuous use of isStripedBlockID by making it private. Thanks for the suggestions, but I just realized that it is used by {{FSEditLogLoader}}, so it still needs to be public at this time. bq. Typo in CorruptReplicasMap L209 Done. > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks.
> However, legacy block IDs in the system can be negative, so we should > not use a hardcoded method to check whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code}
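The ambiguity discussed in this issue can be illustrated with a minimal sketch. This is not the actual `BlockIdManager` code; the method name and sample ID values are illustrative assumptions. It shows only why a sign-based check cannot tell a striped block apart from a legacy, randomly generated block whose ID happens to be negative.

```java
// Minimal illustration (not Hadoop code) of the problem described in
// HDFS-13350: before HDFS-4645, block IDs were random longs and could be
// negative, so "negative" alone does not imply "striped".
public class BlockIdCheckSketch {

    // The kind of hardcoded sign-only check the JIRA argues against.
    public static boolean isStripedBySignOnly(long id) {
        return id < 0;
    }

    public static void main(String[] args) {
        long stripedId = -4611686018427387904L; // sequentially allocated striped ID (example value)
        long legacyId = -1234567890123456789L;  // random pre-HDFS-4645 legacy ID (example value)

        // Both IDs are negative, so the sign-only check conflates them:
        System.out.println(isStripedBySignOnly(stripedId)); // true
        System.out.println(isStripedBySignOnly(legacyId));  // true, but this block is NOT striped
    }
}
```

The fix discussed above instead routes the decision through block-type metadata rather than the sign of the ID alone.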
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: (was: HDFS-13350.01.patch) > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks. > However, legacy block IDs in the system can be negative, so we should > not use a hardcoded method to check whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code}
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: HDFS-13350.01.patch > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks. > However, legacy block IDs in the system can be negative, so we should > not use a hardcoded method to check whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code}
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423219#comment-16423219 ] genericqa commented on HDFS-13331: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 34s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 9s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 16s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 6s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 9s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 17s{color} | {color:green} root: The patch generated 0 new + 363 unchanged - 1 fixed = 363 total (was 364) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 3s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}212m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13331 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917232/HDFS-13331-HDFS-12943.004.patch | | Optional Tests | asflicense compile
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: HDFS-13350.01.patch > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch, HDFS-13350.01.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks. > However, legacy block IDs in the system can be negative, so we should > not use a hardcoded method to check whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code}
[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423165#comment-16423165 ] Ajay Kumar commented on HDFS-13248: --- [~elgoiri] will try to submit a poc patch this week. > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When executing a put operation via the router, the NameNode will choose the block > location for the router, not for the real client. This will affect the file's > locality. > I think that on both the NameNode and the Router, we should add a new addBlock method, or > add a parameter to the current addBlock method, to pass the real client > information.
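The parameter-passing idea proposed in this issue can be sketched roughly as follows. The class, method, and names (`clientMachine`, `chooseTargetFor`) are illustrative assumptions, not the real `ClientProtocol#addBlock` signature; the point is only that forwarding the real client's host lets the NameNode place blocks for the client's locality instead of the router's.

```java
// Hypothetical sketch (not Hadoop code) of passing the real client's host
// through the Router to the NameNode for block placement.
public class AddBlockSketch {

    // Stand-in for NameNode placement: it favours the caller's host.
    static String chooseTargetFor(String host) {
        return "datanode-near-" + host;
    }

    // Existing behaviour corresponds to clientMachine == null: the NameNode
    // only sees the Router, so placement favours the Router's host.
    // Proposed behaviour: the Router forwards the real client's host.
    static String addBlock(String src, String routerHost, String clientMachine) {
        String effective = (clientMachine != null) ? clientMachine : routerHost;
        return chooseTargetFor(effective);
    }

    public static void main(String[] args) {
        // Without the real client info, locality is lost:
        System.out.println(addBlock("/f", "router1", null));       // datanode-near-router1
        // With it, the NameNode can place the block near the client:
        System.out.println(addBlock("/f", "router1", "client42")); // datanode-near-client42
    }
}
```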
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422986#comment-16422986 ] Plamen Jeliazkov commented on HDFS-13331: - On second thought I decided to implement (1) anyway and make use of {{LogCapturer}}. My justification is that we can address any concerns in the follow-up work anyway; and it's possible we keep the log statement but maybe just change its structure once further server implementations are done. I have attached a new patch addressing your points. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331-HDFS-12943.004.patch, > HDFS-13331.trunk.001.patch, HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > the NameNode and stored by the DFSClient. > This JIRA is to follow up on that work and have the DFSClient send its > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare it with their own and act accordingly. > This JIRA work focuses on just the part of making the DFSClient send its state > through the RpcRequestHeader.
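The comparison the issue description talks about can be sketched as follows. The class and method names here are hypothetical, not the actual HDFS-12943 implementation: the idea is simply that an observer compares the client's `lastSeenStateId` (carried in the RpcRequestHeader) against its own applied state, and only serves a read once it has caught up.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch (not real Observer NameNode code) of comparing a
// client's lastSeenStateId against the observer's applied transaction ID.
public class ObserverStateSketch {
    private final AtomicLong appliedStateId = new AtomicLong();

    // Called as the observer tails and applies edits from the active NameNode.
    public void advanceTo(long txId) {
        appliedStateId.accumulateAndGet(txId, Math::max);
    }

    // An observer may serve a read only once it has applied at least as many
    // transactions as the client last saw from the active NameNode.
    public boolean canServe(long clientLastSeenStateId) {
        return appliedStateId.get() >= clientLastSeenStateId;
    }

    public static void main(String[] args) {
        ObserverStateSketch observer = new ObserverStateSketch();
        observer.advanceTo(100);
        System.out.println(observer.canServe(90));  // true: observer is ahead of the client
        System.out.println(observer.canServe(120)); // false: client has seen newer state
    }
}
```

What a real implementation does when the observer is behind (wait, retry, or redirect to the active) is follow-up work outside this sketch.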
[jira] [Updated] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Plamen Jeliazkov updated HDFS-13331: Attachment: HDFS-13331-HDFS-12943.004.patch > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331-HDFS-12943.004.patch, > HDFS-13331.trunk.001.patch, HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > the NameNode and stored by the DFSClient. > This JIRA is to follow up on that work and have the DFSClient send its > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare it with their own and act accordingly. > This JIRA work focuses on just the part of making the DFSClient send its state > through the RpcRequestHeader.
[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422952#comment-16422952 ] James Clampffer commented on HDFS-13376: Thanks for finding the right version of GCC to document, [~GeLiXin]! For completeness, could you also update the error message in native/libhdfspp/CMakeLists.txt where the check is done? It says which versions of Clang work, but it should most likely also say GCC needs to be >= 4.8.1, to help out others who run into this. > TLS support error in Native Build of hadoop-hdfs-native-client > -- > > Key: HDFS-13376 > URL: https://issues.apache.org/jira/browse/HDFS-13376 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, documentation, native >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13376.001.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message): > [exec] FATAL ERROR: The required feature thread_local storage is not > supported by > [exec] your compiler. Known compilers that support this feature: GCC, > Visual > [exec] Studio, Clang (community version), Clang (version for iOS 9 and > later). > [exec] > [exec] > [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed > [exec] -- Configuring incomplete, errors occurred! > {noformat} > My environment: > Linux: Red Hat 4.4.7-3 > cmake: 3.8.2 > java: 1.8.0_131 > gcc: 4.4.7 > maven: 3.5.0 > This seems to be because of the low gcc version; I will report back after > confirming it. > Maybe {{BUILDING.txt}} needs an update to explain the lowest supported gcc > version.
[jira] [Comment Edited] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422883#comment-16422883 ] Plamen Jeliazkov edited comment on HDFS-13331 at 4/2/18 6:00 PM: - Thanks [~xkrogen] -- I debated whether to do the static field work now or in follow-up for a while and I think in follow-up will help make my patch more focused. I think I am able to do it by passing it through {{Call}} object though. As for your points, I agree with them except that for (1) I do not intend the log statement to last long and therefore would opt not to rely on a log capture. Log capture tests are a little hacky as well. If you would really like me to remove the {{@Ignored}} however I can just remove it for now and update the test later. I should have a patch by EOD today. was (Author: zero45): Thanks [~xkrogen] -- I debated whether to do the static field work now or in follow-up for a while and I think in follow-up will help make my patch more focused. I think I am able to do it by passing it through {{Call}} object though. As for your points, I agree with them except that for (1) I do not intend the log statement to last long and therefore would opt not to rely on a log capture. Log capture tests are a little hacky as well. If you would really like me to remove the {{@Ignored}} however I can just remove it for now and update the test later. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. 
> This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
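The lastSeenStateId flow described above boils down to simple client-side bookkeeping: remember the highest stateId observed in any RpcResponseHeader and echo it back as lastSeenStateId on every subsequent RpcRequestHeader. A minimal sketch of that bookkeeping is below; the class and method names are illustrative stand-ins, not the actual Hadoop IPC types, and the real patch wires this through the RPC layer rather than a free-standing class.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only -- not the actual Hadoop client code.
class AlignmentContextSketch {
    // Highest stateId seen in any response so far.
    private final AtomicLong lastSeenStateId = new AtomicLong(Long.MIN_VALUE);

    // Called when an RpcResponseHeader arrives: keep the maximum, since
    // responses may complete out of order.
    void receiveResponseState(long serverStateId) {
        lastSeenStateId.accumulateAndGet(serverStateId, Math::max);
    }

    // Called when building the next RpcRequestHeader: the value the client
    // would send as lastSeenStateId for an Observer to compare against.
    long updateRequestState() {
        return lastSeenStateId.get();
    }
}
```

An ObserverNode receiving a request could then delay serving it until its own applied stateId has caught up to the client's lastSeenStateId, which is the "act accordingly" part the description defers to follow-up work.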
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422883#comment-16422883 ] Plamen Jeliazkov commented on HDFS-13331: - Thanks [~xkrogen] -- I debated whether to do the static field work now or in follow-up for a while and I think in follow-up will help make my patch more focused. I think I am able to do it by passing it through {{Call}} object though. As for your points, I agree with them except that for (1) I do not intend the log statement to last long and therefore would opt not to rely on a log capture. Log capture tests are a little hacky as well. If you would really like me to remove the {{@Ignored}} however I can just remove it for now and update the test later. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422878#comment-16422878 ] Ajay Kumar commented on HDFS-13350: --- [~eddyxu] thanks for working on this. In addition to [~xiaochen]'s comments, I have a few suggestions: * Shall we make the new function {{BlockIdManager#isStripedBlock}} public, as it may be used outside the default package? * We can prevent inadvertent use of {{isStripedBlockID}} by making it private. * Typo in CorruptReplicasMap L209 > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks. > However, legacy block IDs in the system can be negative, so we should not > use a hardcoded method to check whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
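To illustrate why a sign-only check is unsafe, here is a minimal, hypothetical sketch of a legacy-aware variant: a negative ID only indicates a striped block if it is not a known legacy (randomly generated) ID. The {{legacyBlockIds}} registry is purely an assumption for illustration; it does not reflect how the actual patch tracks pre-sequential blocks on the NameNode.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only -- not the HDFS-13350 fix itself.
class LegacyAwareBlockIdCheck {
    // Hypothetical registry of known legacy (randomly generated) block IDs.
    private final Set<Long> legacyBlockIds = new HashSet<>();

    void registerLegacyBlock(long id) {
        legacyBlockIds.add(id);
    }

    // A negative ID alone is not enough to classify a block as striped:
    // legacy randomly generated IDs may also be negative.
    boolean isStripedBlockId(long id) {
        return id < 0 && !legacyBlockIds.contains(id);
    }
}
```

The point of the sketch is only that the classifier must consult some record of legacy blocks rather than the sign bit alone, which is exactly the trap the hardcoded {{isStripedBlockID}} above falls into.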
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422850#comment-16422850 ] genericqa commented on HDFS-13364: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 49s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13364 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917205/HDFS-13364.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e651c85726e6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 54a8121 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23751/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23751/testReport/ | | Max. process+thread count | 940 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23751/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT
[jira] [Comment Edited] (HDFS-13358) RBF: Support for Delegation Token
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422830#comment-16422830 ] Sherwood Zheng edited comment on HDFS-13358 at 4/2/18 5:37 PM: --- [~daryn] Sorry for the delay, it was my first-time oncall last week so I was focusing on my oncall stuff. The reason to have two disjoint subtasks is simply that I want to split the work for KT and DT. I was trying to have a joint ticket on top of these two tasks, but it seems like Jira doesn't allow creating subtasks for one subtask. I am still working on the design doc, and will post it after I finish it and do a simple review within the team. was (Author: zhengxg3): [~daryn] Sorry for the delay, it was my first-time oncall last week. The reason to have two disjoint subtasks is simply because I want to split the work for KT and DT. I was trying to have a joint ticket on top of these two tasks, but it seems like Jira doesn't allow creating subtasks for one subtask. I am still working on the design doc, and will post it after I finish it and do a simple review within the team. > RBF: Support for Delegation Token > - > > Key: HDFS-13358 > URL: https://issues.apache.org/jira/browse/HDFS-13358 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > > HDFS Router should support issuing / managing HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422830#comment-16422830 ] Sherwood Zheng commented on HDFS-13358: --- [~daryn] Sorry for the delay, it was my first-time oncall last week. The reason to have two disjoint subtasks is simply because I want to split the work for KT and DT. I was trying to have a joint ticket on top of these two tasks, but it seems like Jira doesn't allow creating subtasks for one subtask. I am still working on the design doc, and will post it after I finish it and do a simple review within the team. > RBF: Support for Delegation Token > - > > Key: HDFS-13358 > URL: https://issues.apache.org/jira/browse/HDFS-13358 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > > HDFS Router should support issuing / managing HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422806#comment-16422806 ] genericqa commented on HDFS-13381: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 59s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 30s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13381 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917194/HDFS-13381-HDFS-10285-01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 8cc063490fd1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / e24bb54 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23750/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23750/testReport/ | | Max. process+thread count | 3324 (vs. ulimit of 1) | |
[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422764#comment-16422764 ] Íñigo Goiri commented on HDFS-13311: [~chris.douglas], should I move this to HADOOP? > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422762#comment-16422762 ] Lukas Majercak commented on HDFS-13311: --- lgtm > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422756#comment-16422756 ] Íñigo Goiri commented on HDFS-13311: I attached [^HDFS-13311.000.patch] with the fix. Technically, it's a fix in commons, but I'm not sure it's worth moving the issue there. At the beginning I had an if block for the whole thing but the patch was messy; I'm not a big fan of using continue, but this is the cleanest. Thoughts? > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fails > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
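As a hypothetical illustration of the null guard being discussed (the real fix lives in {{GenericOptionsParser#preProcessForWindows}} in hadoop-common, and this sketch does not claim to reproduce its exact logic), a pre-processing loop that skips null argument entries with {{continue}} instead of dereferencing them might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only -- method name and quote handling are assumptions.
class WindowsArgsSketch {
    static String[] preProcess(String[] args) {
        if (args == null) {
            return null;
        }
        List<String> processed = new ArrayList<>();
        for (String arg : args) {
            if (arg == null) {
                // A null element is what triggered the NPE; skip it.
                continue;
            }
            // Strip surrounding quotes that Windows shells may leave intact.
            if (arg.length() >= 2 && arg.startsWith("\"") && arg.endsWith("\"")) {
                arg = arg.substring(1, arg.length() - 1);
            }
            processed.add(arg);
        }
        return processed.toArray(new String[0]);
    }
}
```

The alternative mentioned in the comment, wrapping the whole loop body in an if block, is behaviorally equivalent but indents every statement one level deeper, which is why the {{continue}} form reads cleaner here.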
[jira] [Updated] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13311: --- Attachment: HDFS-13311.000.patch > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
[ https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422741#comment-16422741 ] genericqa commented on HDFS-13325: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 25s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 34s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 35s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 31s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/acceptance-test hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 32s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 33s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s{color} | {color:red} integration-test in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s{color} | {color:red} objectstore-service in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 40s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 36s{color} | {color:red} integration-test in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 31s{color} | {color:red} objectstore-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 34s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | |
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422710#comment-16422710 ] Íñigo Goiri commented on HDFS-13365: [~linyiqun], those comments are pretty much a copy from {{Datanode#checkSuperuserPrivilege()}}; I think they don't hurt. > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, > HDFS-13365.003.patch, HDFS-13365.004.patch > > > We should support HTrace and add spans. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422708#comment-16422708 ] Íñigo Goiri commented on HDFS-13364: Thanks [~linyiqun] for the comments. Yes, wrong patch... the problem of doing two patches at the same time :) I think [^HDFS-13364.004.patch] tackles all your comments. > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch, HDFS-13364.004.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13364: --- Attachment: HDFS-13364.004.patch > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch, HDFS-13364.004.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13364: --- Attachment: (was: HDFS-13365.004.patch) > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13337) Backport HDFS-4275 to branch-2.9
[ https://issues.apache.org/jira/browse/HDFS-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422689#comment-16422689 ] Íñigo Goiri commented on HDFS-13337: [~surmountian], can you confirm? > Backport HDFS-4275 to branch-2.9 > > > Key: HDFS-13337 > URL: https://issues.apache.org/jira/browse/HDFS-13337 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Minor > Attachments: HDFS-13337-branch-2.000.patch > > > Multiple HDFS test suites fail on Windows during initialization of > MiniDFSCluster due to "Could not fully delete" the name testing data > directory.
[jira] [Assigned] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
[ https://issues.apache.org/jira/browse/HDFS-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh reassigned HDFS-13383: Assignee: Mukul Kumar Singh > Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths > - > > Key: HDFS-13383 > URL: https://issues.apache.org/jira/browse/HDFS-13383 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > > start-ozone.sh calls start-dfs.sh to start the NN and DN in an ozone cluster. > Starting of the datanode fails because of incomplete classpaths, as the datanode is > unable to load all the plugins. > Setting the class path to the following values does resolve the issue: > {code} > export > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/* > {code}
[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart
[ https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422668#comment-16422668 ] He Xiaoqiao commented on HDFS-12749: [~kihwal] Thanks for your feedback. I misunderstood your idea above. The fix is indeed to go in the catch block of {{processCommand()}}; I will solve this issue in a couple of days. > DN may not send block report to NN after NN restart > --- > > Key: HDFS-12749 > URL: https://issues.apache.org/jira/browse/HDFS-12749 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1 >Reporter: TanYuxin >Assignee: He Xiaoqiao >Priority: Major > Attachments: HDFS-12749-branch-2.7.002.patch, > HDFS-12749-trunk.003.patch, HDFS-12749.001.patch > > > Now our cluster has thousands of DNs and millions of files and blocks. When the NN > restarts, its load is very high. > After NN restart, the DN will call the BPServiceActor#reRegister method to register. > But the register RPC will get an IOException since the NN is busy dealing with Block > Reports. The exception is caught at BPServiceActor#processCommand. > Next is the caught IOException: > {code:java} > WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing > datanode Command > java.io.IOException: Failed on local exception: java.io.IOException: > java.net.SocketTimeoutException: 6 millis timeout while waiting for > channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected > local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local > host is: "DataNode_Host/Datanode_IP"; destination host is: > "NameNode_Host":Port; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773) > at org.apache.hadoop.ipc.Client.call(Client.java:1474) > at org.apache.hadoop.ipc.Client.call(Client.java:1407) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) > at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926) > at > org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711) > at > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864) > at java.lang.Thread.run(Thread.java:745) > {code} > The un-catched IOException breaks BPServiceActor#register, and the Block > Report can not be sent immediately. > {code} > /** >* Register one bp with the corresponding NameNode >* >* The bpDatanode needs to register with the namenode on startup in order >* 1) to report which storage it is serving now and >* 2) to receive a registrationID >* >* issued by the namenode to recognize registered datanodes. 
>* >* @param nsInfo current NamespaceInfo >* @see FSNamesystem#registerDatanode(DatanodeRegistration) >* @throws IOException >*/ > void register(NamespaceInfo nsInfo) throws IOException { > // The handshake() phase loaded the block pool storage > // off disk - so update the bpRegistration object from that info > DatanodeRegistration newBpRegistration = bpos.createRegistration(); > LOG.info(this + " beginning handshake with NN"); > while (shouldRun()) { > try { > // Use returned registration from namenode with updated fields > newBpRegistration = bpNamenode.registerDatanode(newBpRegistration); > newBpRegistration.setNamespaceInfo(nsInfo); > bpRegistration = newBpRegistration; > break; > } catch(EOFException e) { // namenode might have just restarted > LOG.info("Problem connecting to server: " + nnAddr + " :" > + e.getLocalizedMessage()); > sleepAndLogInterrupts(1000, "connecting to server"); > } catch(SocketTimeoutException e) { // namenode is busy > LOG.info("Problem connecting to server: " + nnAddr); > sleepAndLogInterrupts(1000, "connecting to server"); > } > } > > LOG.info("Block pool " + this + " successfully registered with NN"); > bpos.registrationSucceeded(this, bpRegistration); > // random short delay - helps scatter the BR from all DNs >
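The failure mode described above can be reduced to a small standalone sketch. This is plain Java with hypothetical names, not Hadoop's actual BPServiceActor code: it only illustrates the fix direction the comment suggests, namely that catching the IOException inside the command processor keeps the actor thread alive so re-registration (and with it the block report) is retried on a later heartbeat instead of being silently dropped.

```java
import java.io.IOException;

// Standalone sketch (hypothetical names, not Hadoop's actual classes)
// of catching the IOException inside the command processor so a failed
// re-registration is retried instead of aborting the actor loop.
public class ReRegisterSketch {
    static int attempts = 0;

    // Simulated registerDatanode() RPC that times out while the NN is
    // busy processing block reports, then succeeds on the third try.
    static void reRegister() throws IOException {
        attempts++;
        if (attempts < 3) {
            throw new IOException("SocketTimeoutException: NN busy");
        }
    }

    // Analogue of processCommand(): swallow the IOException instead of
    // letting it escape and break the register() retry loop.
    static boolean processCommand() {
        try {
            reRegister();
            return true;  // registered; block report can now be sent
        } catch (IOException e) {
            return false; // stay alive; retry on the next heartbeat
        }
    }

    public static void main(String[] args) {
        boolean registered = false;
        for (int i = 0; i < 5 && !registered; i++) {
            registered = processCommand(); // heartbeat-loop analogue
        }
        System.out.println("registered=" + registered + " attempts=" + attempts);
    }
}
```

Under this toy failure model, registration succeeds on the third attempt because each timeout is absorbed rather than propagated.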
[jira] [Created] (HDFS-13383) Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths
Mukul Kumar Singh created HDFS-13383: Summary: Ozone: start-ozone.sh fail to start datanode because of incomplete classpaths Key: HDFS-13383 URL: https://issues.apache.org/jira/browse/HDFS-13383 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Mukul Kumar Singh Fix For: HDFS-7240 start-ozone.sh calls start-dfs.sh to start the NN and DN in an ozone cluster. Starting of the datanode fails because of incomplete classpaths, as the datanode is unable to load all the plugins. Setting the class path to the following values does resolve the issue: {code} export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/ozone/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/hdsl/lib/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/*:/opt/hadoop/hadoop-3.2.0-SNAPSHOT/share/hadoop/cblock/lib/* {code}
[jira] [Commented] (HDFS-10183) Prevent race condition during class initialization
[ https://issues.apache.org/jira/browse/HDFS-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422558#comment-16422558 ] genericqa commented on HDFS-10183: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-10183 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794395/HDFS-10183.2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d394da0fcd97 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dc8e343 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23746/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23746/testReport/ | | Max. process+thread count | 2848 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23746/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT
[jira] [Updated] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-13381: Attachment: HDFS-13381-HDFS-10285-01.patch > [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path > > > Key: HDFS-13381 > URL: https://issues.apache.org/jira/browse/HDFS-13381 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R >Priority: Major > Attachments: HDFS-13381-HDFS-10285-00.patch, > HDFS-13381-HDFS-10285-01.patch > > > This Jira task will address the following comments: > # Use DFSUtilClient::makePathFromFileId, instead of generics(one for string > path and another for inodeId) like today. > # Only the context impl differs for external/internal sps. Here, it can > simply move FileCollector and BlockMoveTaskHandler to Context interface.
[jira] [Commented] (HDFS-13348) Ozone: Update IP and hostname in Datanode from SCM's response to the register call
[ https://issues.apache.org/jira/browse/HDFS-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422530#comment-16422530 ] Nanda kumar commented on HDFS-13348: Thanks [~shashikant] for working on this. Please find the review comments below. *StorageContainerDatanodeProtocol.proto* nitpick: hostName can be renamed to hostname. *RegisteredCommand.java* nitpick: hostName can be renamed to hostname. Line 190 & 195: There is no need for a null check here. If the setHostName & setIpAddress methods are not called, the value of hostName & ipAddress will be null. *RegisterEndpointTask.java* Not related to the change done in this jira: Line:90 The error message is wrong. *SCMNodeManager.java* Line:803 & 804 hostname and ip are not always set. When {{Server.getRemoteIp()}} returns null (i.e. if the method is not called inside an RPC) the value of hostname and ip will be null. This case has to be handled. > Ozone: Update IP and hostname in Datanode from SCM's response to the register > call > -- > > Key: HDFS-13348 > URL: https://issues.apache.org/jira/browse/HDFS-13348 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13348-HDFS-7240.000.patch > > > Whenever a Datanode registers with SCM, the SCM resolves the IP address and > hostname of the Datanode from the RPC call. This IP address and hostname > should be sent back to Datanode in the response to the register call and the > Datanode has to update the values from the response to its > {{DatanodeDetails}}.
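The {{Server.getRemoteIp()}} review point above can be illustrated with a standalone sketch. All names here are hypothetical stand-ins, not the actual SCMNodeManager code: the point is only that when the remote-IP lookup returns null (i.e. the method was not called inside an RPC handler), the code must fall back to the address the datanode itself reported rather than dereference the null.

```java
import java.net.InetAddress;

// Hypothetical sketch of the null handling the review asks for: the
// remote-IP lookup returns null outside an RPC handler, so fall back
// to the datanode-reported address. Not the actual SCMNodeManager code.
public class RegisterNullCheckSketch {
    // Stand-in for org.apache.hadoop.ipc.Server.getRemoteIp(); null
    // simulates the "not inside an RPC call" case the review describes.
    static InetAddress getRemoteIp() {
        return null;
    }

    static String resolveIp(String reportedIp) {
        InetAddress remote = getRemoteIp();
        // Guard the null case instead of assuming an RPC context.
        return (remote != null) ? remote.getHostAddress() : reportedIp;
    }

    public static void main(String[] args) {
        // With no RPC context, the datanode-reported address is kept.
        System.out.println(resolveIp("10.0.0.7"));
    }
}
```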
[jira] [Commented] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422273#comment-16422273 ] genericqa commented on HDFS-13301: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 50s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s{color} | {color:red} hadoop-hdfs-client in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:7a542fb | | JIRA Issue | HDFS-13301 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917189/HDFS-13301-HDFS-7240.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 30e472d086e8 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / d0488c7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23749/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt
[jira] [Commented] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
[ https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422234#comment-16422234 ] Shashikant Banerjee commented on HDFS-13324: patch v1 depends on HDFS-13301. Not submitting it for now. > Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails > -- > > Key: HDFS-13324 > URL: https://issues.apache.org/jira/browse/HDFS-13324 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13324-HDFS-7240.000.patch > > > We have removed the dependency of DatanodeID in HDSL/Ozone and there is no > need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and > InfoSecurePort from DatanodeDetails.
[jira] [Updated] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
[ https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13324: --- Attachment: HDFS-13324-HDFS-7240.000.patch > Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails > -- > > Key: HDFS-13324 > URL: https://issues.apache.org/jira/browse/HDFS-13324 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13324-HDFS-7240.000.patch > > > We have removed the dependency of DatanodeID in HDSL/Ozone and there is no > need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and > InfoSecurePort from DatanodeDetails.
[jira] [Commented] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422214#comment-16422214 ] Shashikant Banerjee commented on HDFS-13301: Patch v1 removes the HDSL/Ozone related changes from hdfs proto and DatanodeID. > Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and > DatanodeIDProto > - > > Key: HDFS-13301 > URL: https://issues.apache.org/jira/browse/HDFS-13301 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13301-HDFS-7240.000.patch > > > HDFS-13300 decouples DatanodeID from HDSL/Ozone, it's now safe to remove > {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} > and {{DatanodeIDProto}}. This jira is to track the removal of Ozone related > fields from {{DatanodeID}} and {{DatanodeIDProto}}.
[jira] [Updated] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13301: --- Status: Patch Available (was: Open) > Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and > DatanodeIDProto > - > > Key: HDFS-13301 > URL: https://issues.apache.org/jira/browse/HDFS-13301 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13301-HDFS-7240.000.patch > > > HDFS-13300 decouples DatanodeID from HDSL/Ozone, it's now safe to remove > {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} > and {{DatanodeIDProto}}. This jira is to track the removal of Ozone related > fields from {{DatanodeID}} and {{DatanodeIDProto}}.
[jira] [Commented] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422216#comment-16422216 ] genericqa commented on HDFS-13381: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 9s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 19s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HDFS-13381 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917184/HDFS-13381-HDFS-10285-00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 86662f15f517 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / e24bb54 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23747/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | | compile |
[jira] [Updated] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13301: --- Attachment: HDFS-13301-HDFS-7240.000.patch > Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and > DatanodeIDProto > - > > Key: HDFS-13301 > URL: https://issues.apache.org/jira/browse/HDFS-13301 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13301-HDFS-7240.000.patch > > > HDFS-13300 decouples DatanodeID from HDSL/Ozone, it's now safe to remove > {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} > and {{DatanodeIDProto}}. This jira is to track the removal of Ozone related > fields from {{DatanodeID}} and {{DatanodeIDProto}}.
[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container
[ https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422211#comment-16422211 ] genericqa commented on HDFS-12794: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 6s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} objectstore-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} objectstore-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s{color} | {color:red} client in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s{color} | {color:red} objectstore-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} |
[jira] [Commented] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
[ https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422202#comment-16422202 ] Shashikant Banerjee commented on HDFS-13325: Patch v1 renames ObjectStoreRestPlugin to OzoneDatanodeService and removes DataNodeServicePlugin. > Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService > --- > > Key: HDFS-13325 > URL: https://issues.apache.org/jira/browse/HDFS-13325 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13325-HDFS-7240.000.patch > > > Based on this comment, we can rename {{ObjectStoreRestPlugin}} to > {{OzoneDatanodeService}} so that the plugin name will be consistent with > {{HdslDatanodeService}}. We can also remove {{DataNodeServicePlugin}} and > directly use {{ServicePlugin}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
[ https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13325: --- Status: Patch Available (was: Open) > Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService > --- > > Key: HDFS-13325 > URL: https://issues.apache.org/jira/browse/HDFS-13325 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13325-HDFS-7240.000.patch > > > Based on this comment, we can rename {{ObjectStoreRestPlugin}} to > {{OzoneDatanodeService}} so that the plugin name will be consistent with > {{HdslDatanodeService}}. We can also remove {{DataNodeServicePlugin}} and > directly use {{ServicePlugin}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
[ https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13325: --- Attachment: HDFS-13325-HDFS-7240.000.patch > Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService > --- > > Key: HDFS-13325 > URL: https://issues.apache.org/jira/browse/HDFS-13325 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Attachments: HDFS-13325-HDFS-7240.000.patch > > > Based on this comment, we can rename {{ObjectStoreRestPlugin}} to > {{OzoneDatanodeService}} so that the plugin name will be consistent with > {{HdslDatanodeService}}. We can also remove {{DataNodeServicePlugin}} and > directly use {{ServicePlugin}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13382) Allocator does not initialize lotSize during hdfs mover process
[ https://issues.apache.org/jira/browse/HDFS-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingxin Wu updated HDFS-13382: -- Attachment: HDFS-13382.001.patch > Allocator does not initialize lotSize during hdfs mover process > --- > > Key: HDFS-13382 > URL: https://issues.apache.org/jira/browse/HDFS-13382 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer mover >Affects Versions: 2.7.5 >Reporter: Qingxin Wu >Priority: Major > Attachments: HDFS-13382.001.patch > > > Currently, when we execute > {code:java} > hdfs mover -p /some/path > {code} > the moverThreadAllocator in org.apache.hadoop.hdfs.server.balancer.Dispatcher > does not initialize lotSize according to _dfs.mover.moverThreads and > dfs.datanode.balance.max.concurrent.moves._ > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13382) Allocator does not initialize lotSize during hdfs mover process
Qingxin Wu created HDFS-13382: - Summary: Allocator does not initialize lotSize during hdfs mover process Key: HDFS-13382 URL: https://issues.apache.org/jira/browse/HDFS-13382 Project: Hadoop HDFS Issue Type: Bug Components: balancer mover Affects Versions: 2.7.5 Reporter: Qingxin Wu Currently, when we execute {code:java} hdfs mover -p /some/path {code} the moverThreadAllocator in org.apache.hadoop.hdfs.server.balancer.Dispatcher does not initialize lotSize according to _dfs.mover.moverThreads and dfs.datanode.balance.max.concurrent.moves._ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-13381: Attachment: HDFS-13381-HDFS-10285-00.patch > [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path > > > Key: HDFS-13381 > URL: https://issues.apache.org/jira/browse/HDFS-13381 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R >Priority: Major > Attachments: HDFS-13381-HDFS-10285-00.patch > > > This Jira task will address the following comments: > # Use DFSUtilClient::makePathFromFileId, instead of generics(one for string > path and another for inodeId) like today. > # Only the context impl differs for external/internal sps. Here, it can > simply move FileCollector and BlockMoveTaskHandler to Context interface. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-13381: Status: Patch Available (was: Open) > [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path > > > Key: HDFS-13381 > URL: https://issues.apache.org/jira/browse/HDFS-13381 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R >Priority: Major > Attachments: HDFS-13381-HDFS-10285-00.patch > > > This Jira task will address the following comments: > # Use DFSUtilClient::makePathFromFileId, instead of generics(one for string > path and another for inodeId) like today. > # Only the context impl differs for external/internal sps. Here, it can > simply move FileCollector and BlockMoveTaskHandler to Context interface. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13381) [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
Rakesh R created HDFS-13381: --- Summary: [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path Key: HDFS-13381 URL: https://issues.apache.org/jira/browse/HDFS-13381 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Rakesh R Assignee: Rakesh R This Jira task will address the following comments: # Use DFSUtilClient::makePathFromFileId, instead of generics(one for string path and another for inodeId) like today. # Only the context impl differs for external/internal sps. Here, it can simply move FileCollector and BlockMoveTaskHandler to Context interface. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
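For context on the first point above: HDFS can address a file by its inode id through the reserved path {{/.reserved/.inodes/<id>}}, which is the kind of path {{DFSUtilClient#makePathFromFileId()}} builds, so the satisfier can track files by id alone instead of carrying both a String path and an inode id. A minimal sketch (the helper below is an illustration, not the real DFSUtilClient code):

```java
public class InodePathSketch {
    // Illustrative stand-in for DFSUtilClient#makePathFromFileId: builds
    // the reserved inode path that resolves to the file with that id.
    static String makePathFromFileId(long fileId) {
        return "/.reserved/.inodes/" + fileId;
    }

    public static void main(String[] args) {
        // any component can reconstruct a usable path from the id alone
        System.out.println(makePathFromFileId(16386L)); // /.reserved/.inodes/16386
    }
}
```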
[jira] [Updated] (HDFS-10183) Prevent race condition during class initialization
[ https://issues.apache.org/jira/browse/HDFS-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-10183: - Fix Version/s: (was: 2.9.1) > Prevent race condition during class initialization > -- > > Key: HDFS-10183 > URL: https://issues.apache.org/jira/browse/HDFS-10183 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.9.0 >Reporter: Pavel Avgustinov >Assignee: Pavel Avgustinov >Priority: Minor > Attachments: HADOOP-12944.1.patch, HDFS-10183.2.patch > > > In HADOOP-11969, [~busbey] tracked down a non-deterministic > {{NullPointerException}} to an oddity in the Java memory model: When multiple > threads trigger the loading of a class at the same time, one of them wins and > creates the {{java.lang.Class}} instance; the others block during this > initialization, but once it is complete they may obtain a reference to the > {{Class}} which has non-{{final}} fields still containing their default (i.e. > {{null}}) values. This leads to runtime failures that are hard to debug or > diagnose. > HADOOP-11969 observed that {{ThreadLocal}} fields, by their very nature, are > very likely to be accessed from multiple threads, and thus the problem is > particularly severe there. Consequently, the patch removed all occurrences of > the issue in the code base. > Unfortunately, since then HDFS-7964 has [reverted one of the fixes during a > refactoring|https://github.com/apache/hadoop/commit/2151716832ad14932dd65b1a4e47e64d8d6cd767#diff-0c2e9f7f9e685f38d1a11373b627cfa6R151], > and introduced a [new instance of the > problem|https://github.com/apache/hadoop/commit/2151716832ad14932dd65b1a4e47e64d8d6cd767#diff-6334d0df7d9aefbccd12b21bb7603169R43]. > The attached patch addresses the issue by adding the missing {{final}} > modifier in these two cases. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
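To illustrate the fix in HDFS-10183 above: the Java memory model guarantees safe publication across the class-initialization race only for {{final}} fields, which is why adding the missing {{final}} modifier closes the window where another thread could observe a default ({{null}}) value. A self-contained sketch of the safe pattern (class and field names are invented for illustration):

```java
public class ClassInitRace {
    // Declaring the field final guarantees that any thread obtaining a
    // reference to the initialized Class sees the constructed value, never
    // the default null. Without final, a thread racing on class init may
    // observe null -- the failure HADOOP-11969 tracked down.
    static final ThreadLocal<StringBuilder> BUF =
        ThreadLocal.withInitial(StringBuilder::new);

    public static boolean bufIsVisible() {
        return BUF != null && BUF.get() != null;
    }

    public static void main(String[] args) throws Exception {
        // several threads trigger class initialization concurrently
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                if (!bufIsVisible()) {
                    throw new IllegalStateException("observed uninitialized field");
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        System.out.println("all threads saw the final field initialized");
    }
}
```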
[jira] [Updated] (HDFS-13337) Backport HDFS-4275 to branch-2.9
[ https://issues.apache.org/jira/browse/HDFS-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-13337: - Target Version/s: 2.10.0, 2.9.2 (was: 2.10.0, 2.9.1) > Backport HDFS-4275 to branch-2.9 > > > Key: HDFS-13337 > URL: https://issues.apache.org/jira/browse/HDFS-13337 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Íñigo Goiri >Assignee: Xiao Liang >Priority: Minor > Attachments: HDFS-13337-branch-2.000.patch > > > Multiple HDFS test suites fail on Windows during initialization of > MiniDFSCluster due to "Could not fully delete" the name testing data > directory. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache
[ https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-11885: - Target Version/s: 2.8.3, 3.2.0, 2.9.2 (was: 2.8.3, 2.9.1, 3.2.0) > createEncryptionZone should not block on initializing EDEK cache > > > Key: HDFS-11885 > URL: https://issues.apache.org/jira/browse/HDFS-11885 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Major > Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, > HDFS-11885.003.patch, HDFS-11885.004.patch > > > When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which > calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, > which attempts to fill the key cache up to the low watermark. > If the KMS is down or slow, this can take a very long time, and cause the > createZone RPC to fail with a timeout. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
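One direction for the HDFS-11885 fix above is to fire the warm-up asynchronously so the createZone RPC returns promptly even when the KMS is slow or down. A hedged sketch (the method names mirror the JIRA text but are stand-ins, not the actual KeyProvider API):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncEdekWarmup {
    // Stand-in for the real KeyProviderCryptoExtension call, which fills the
    // EDEK cache up to the low watermark and may block on a slow KMS.
    static void warmUpEncryptedKeys(String keyName) {
        // pretend this talks to the KMS
    }

    // Instead of blocking createEncryptionZone on the warm-up, run it in the
    // background; the cache fills while the RPC returns immediately.
    public static CompletableFuture<Void> warmUpAsync(String keyName) {
        return CompletableFuture.runAsync(() -> warmUpEncryptedKeys(keyName));
    }

    public static void main(String[] args) {
        CompletableFuture<Void> warmUp = warmUpAsync("myKey");
        System.out.println("createZone can return here, before the cache is warm");
        warmUp.join(); // background warm-up finishes on its own schedule
    }
}
```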
[jira] [Updated] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12257: - Target Version/s: 2.8.3, 3.2.0, 2.9.2 (was: 2.8.3, 2.9.1, 3.2.0) > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang >Priority: Major > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, > HDFS-12257.003.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we > should expose listing there as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13051) dead lock occurs when rolleditlog rpc call happen and editPendingQ is full
[ https://issues.apache.org/jira/browse/HDFS-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-13051: - Target Version/s: 2.10.0, 2.8.4, 2.7.6, 3.0.2, 2.9.2 (was: 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2) > dead lock occurs when rolleditlog rpc call happen and editPendingQ is full > -- > > Key: HDFS-13051 > URL: https://issues.apache.org/jira/browse/HDFS-13051 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.5 >Reporter: zhangwei >Assignee: Daryn Sharp >Priority: Major > Labels: AsyncEditlog, deadlock > Attachments: HDFS-13112.patch, deadlock.patch > > > When doing rollEditLog, the handler acquires the FS write lock, then the FSEditLogAsync > object lock, and writes 3 edits (the second one overrides the logEdit method and > returns true). > In an extreme case, when FSEditLogAsync's logSync is very > slow and editPendingQ (default size 4096) is full, the IPC thread cannot offer the > edit object into editPendingQ while doing rollEditLog; it blocks on the editPendingQ > .put method without releasing the FSEditLogAsync object lock. The > edit.logEdit method in the FSEditLogAsync.run thread can then never acquire the > FSEditLogAsync object lock, causing a deadlock. > Stack trace below: > "Thread[Thread-44528,5,main]" #130093 daemon prio=5 os_prio=0 > tid=0x02377000 nid=0x13fda waiting on condition [0x7fb3297de000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x7fbd3cb96f58> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.enqueueEdit(FSEditLogAsync.java:156) > at >
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.logEdit(FSEditLogAsync.java:118) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLog.logCancelDelegationToken(FSEditLog.java:1008) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.logExpireDelegationToken(FSNamesystem.java:7635) > at > org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager.logExpireToken(DelegationTokenSecretManager.java:395) > - locked <0x7fbd3cbae500> (a java.lang.Object) > at > org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager.logExpireToken(DelegationTokenSecretManager.java:62) > at > org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.removeExpiredToken(AbstractDelegationTokenSecretManager.java:604) > at > org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.access$400(AbstractDelegationTokenSecretManager.java:54) > at > org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:656) > at java.lang.Thread.run(Thread.java:745) > "FSEditLogAsync" #130072 daemon prio=5 os_prio=0 tid=0x0715b800 > nid=0x13fbf waiting for monitor entry [0x7fb32c51a000] > java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:443) > - waiting to lock <*0x7fbcbc131000*> (a > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:233) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:177) > at java.lang.Thread.run(Thread.java:745) > "IPC Server handler 47 on 53310" #337 daemon prio=5 os_prio=0 > tid=0x7fe659d46000 nid=0x4c62 waiting on condition [0x7fb32fe52000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x7fbd3cb96f58> (a > 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.enqueueEdit(FSEditLogAsync.java:156) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.logEdit(FSEditLogAsync.java:118) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1251) > - locked <*0x7fbcbc131000*> (a >
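The heart of the deadlock in the stack traces above is blocking on a full bounded queue while holding a monitor that the queue's consumer also needs. The sketch below (illustrative names, not the actual FSEditLogAsync code) contrasts the blocking {{put()}} with the non-blocking {{offer()}}:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedQueueUnderLock {
    // In FSEditLogAsync, the IPC handler calls editPendingQ.put() while still
    // holding the FSEditLogAsync monitor; when the queue is full, put() parks
    // forever, and the draining thread -- which needs that same monitor to
    // finish an edit -- can never free a slot. offer() is the non-blocking
    // alternative: it fails fast instead of parking a lock-holding thread.
    public static boolean tryEnqueue(ArrayBlockingQueue<String> q, String edit) {
        return q.offer(edit);
    }

    public static void main(String[] args) {
        ArrayBlockingQueue<String> editPendingQ = new ArrayBlockingQueue<>(2);
        System.out.println(tryEnqueue(editPendingQ, "edit-1")); // true
        System.out.println(tryEnqueue(editPendingQ, "edit-2")); // true
        // queue is full: put() here would block indefinitely; offer() refuses
        System.out.println(tryEnqueue(editPendingQ, "edit-3")); // false
    }
}
```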
[jira] [Updated] (HDFS-13174) hdfs mover -p /path times out after 20 min
[ https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-13174: - Target Version/s: 3.0.1, 2.8.4, 2.7.6, 2.9.2 (was: 2.9.1, 3.0.1, 2.8.4, 2.7.6) > hdfs mover -p /path times out after 20 min > -- > > Key: HDFS-13174 > URL: https://issues.apache.org/jira/browse/HDFS-13174 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer mover >Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2 >Reporter: Istvan Fajth >Assignee: Istvan Fajth >Priority: Major > > HDFS-11015 introduced an iteration timeout in the Dispatcher.Source > class that is checked while dispatching the moves that the Balancer and the > Mover do. This timeout is hardwired to 20 minutes. > The Balancer works in iterations: even if an iteration times out, > the Balancer keeps running and does another iteration, failing only if > no moves happened in a few iterations. > The Mover, on the other hand, does not have iterations, so if moving a path > runs for more than 20 minutes, the Mover will stop with the > following exception reported to the console (lines might differ as this > exception came from a CDH5.12.1 installation): > java.io.IOException: Block move timed out > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186) > at > org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: 
hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.
[ https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422102#comment-16422102 ] Rakesh R edited comment on HDFS-13328 at 4/2/18 10:29 AM: -- Thanks [~surendrasingh] for the patch. Apart from the below minor comments, I'm +1 for the patch. # Please make the class private. {code:java} private class ReencryptionPendingInodeIdCollector{code} # Please make it final {code:java} private final ReencryptionHandler reencryptionHandler; private final ReencryptionPendingInodeIdCollector traverser; private final FSDirectory dir; {code} was (Author: rakeshr): Thanks [~surendrasingh] for the patch. Apart from the below minor comments, I'm +1 for the patch. # Please make the class private. {code:java} private class ReencryptionPendingInodeIdCollector{code} # Please make it final {code:java} private final ReencryptionHandler reencryptionHandler; private final ReencryptionPendingInodeIdCollector traverser; private final FSDirectory dir; {code} > Abstract ReencryptionHandler recursive logic in separate class. > --- > > Key: HDFS-13328 > URL: https://issues.apache.org/jira/browse/HDFS-13328 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-13328-01.patch > > > HDFS-10899 added DFS logic to scan a directory. It is good to abstract this > logic in separate class, so it can be used in some other feature like > SPS(HDFS-10285). I already tried abstracting DFS logic in HDFS-12291 and same > can be pushed in trunk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13328) Abstract ReencryptionHandler recursive logic in separate class.
[ https://issues.apache.org/jira/browse/HDFS-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422102#comment-16422102 ] Rakesh R commented on HDFS-13328: - Thanks [~surendrasingh] for the patch. Apart from the below minor comments, I'm +1 for the patch. # Please make the class private. {code:java} private class ReencryptionPendingInodeIdCollector{code} # Please make it final {code:java} private final ReencryptionHandler reencryptionHandler; private final ReencryptionPendingInodeIdCollector traverser; private final FSDirectory dir; {code} > Abstract ReencryptionHandler recursive logic in separate class. > --- > > Key: HDFS-13328 > URL: https://issues.apache.org/jira/browse/HDFS-13328 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Attachments: HDFS-13328-01.patch > > > HDFS-10899 added DFS logic to scan a directory. It is good to abstract this > logic in separate class, so it can be used in some other feature like > SPS(HDFS-10285). I already tried abstracting DFS logic in HDFS-12291 and same > can be pushed in trunk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13301) Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and DatanodeIDProto
[ https://issues.apache.org/jira/browse/HDFS-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-13301: -- Assignee: Shashikant Banerjee > Remove containerPort, ratisPort and ozoneRestPort from DatanodeID and > DatanodeIDProto > - > > Key: HDFS-13301 > URL: https://issues.apache.org/jira/browse/HDFS-13301 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > > HDFS-13300 decouples DatanodeID from HDSL/Ozone, it's now safe to remove > {{containerPort}}, {{ratisPort}} and {{ozoneRestPort}} from {{DatanodeID}} > and {{DatanodeIDProto}}. This jira is to track the removal of Ozone related > fields from {{DatanodeID}} and {{DatanodeIDProto}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
[ https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422083#comment-16422083 ] Yiqun Lin edited comment on HDFS-13380 at 4/2/18 10:02 AM: --- Thanks for reporting this, [~wuweiwei]! Actually now we will do the quota verification for all WRITE type operations. We can make this fine-grained. was (Author: linyiqun): Thanks for reporting this, [~wuweiwei]! Actually now we will do the quota verification on for all WRITE type operation. We can make this be fine-grained controlled. > RBF: mv/rm fail after the directory exceeded the quota limit > > > Key: HDFS-13380 > URL: https://issues.apache.org/jira/browse/HDFS-13380 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Priority: Major > > It always fails when I try to mv/rm a directory which has exceeded the > quota limit. > {code:java} > [hadp@hadoop]$ hdfs dfsrouteradmin -ls > Mount Table Entries: > Source Destinations Owner Group Mode Quota/Usage > /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-] > [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 > rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: > The NameSpace quota (directories and files) is exceeded: quota=1200 file > count=1201 > [hadp@hadoop]$ hdfs dfs -rm -skipTrash > hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 > rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file > count=1201 > {code} > I think we should add a parameter for the method *getLocationsForPath,* to > determine if we need to perform quota verification on the operation. For > example, the source directory of mv and the target directory of rm. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
[ https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-13324: -- Assignee: Shashikant Banerjee > Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails > -- > > Key: HDFS-13324 > URL: https://issues.apache.org/jira/browse/HDFS-13324 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > > We have removed the dependency of DatanodeID in HDSL/Ozone and there is no > need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and > InfoSecurePort from DatanodeDetails. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
[ https://issues.apache.org/jira/browse/HDFS-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422083#comment-16422083 ] Yiqun Lin commented on HDFS-13380: -- Thanks for reporting this, [~wuweiwei]! Actually now we will do the quota verification for all WRITE type operations. We can make this fine-grained. > RBF: mv/rm fail after the directory exceeded the quota limit > > > Key: HDFS-13380 > URL: https://issues.apache.org/jira/browse/HDFS-13380 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Priority: Major > > It always fails when I try to mv/rm a directory which has exceeded the > quota limit. > {code:java} > [hadp@hadoop]$ hdfs dfsrouteradmin -ls > Mount Table Entries: > Source Destinations Owner Group Mode Quota/Usage > /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-] > [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 > rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: > The NameSpace quota (directories and files) is exceeded: quota=1200 file > count=1201 > [hadp@hadoop]$ hdfs dfs -rm -skipTrash > hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 > rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file > count=1201 > {code} > I think we should add a parameter for the method *getLocationsForPath,* to > determine if we need to perform quota verification on the operation. For > example, the source directory of mv and the target directory of rm. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
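The suggestion above, a flag on {{getLocationsForPath}} controlling whether quota is verified, could look roughly like this. Everything here is a hypothetical sketch; the real Router mount-table resolution is omitted, and the names only mirror the JIRA discussion:

```java
public class QuotaCheckSketch {
    // quota on the mount entry, as in the JIRA's example (NsQuota: 1200/1201)
    static final long NS_QUOTA = 1200;

    // Hypothetical signature: a checkQuota flag lets namespace-shrinking
    // operations (rm, the src side of mv) bypass the mount-point quota check
    // that currently rejects every WRITE-type call.
    public static void getLocationsForPath(String path, long nsCount, boolean checkQuota) {
        if (checkQuota && nsCount > NS_QUOTA) {
            throw new IllegalStateException(
                "The NameSpace quota (directories and files) is exceeded: quota="
                    + NS_QUOTA + " file count=" + nsCount);
        }
        // ...resolve the mount table entry and forward to the right namespace...
    }

    public static void main(String[] args) {
        // rm shrinks the namespace: skip the check, the call goes through
        getLocationsForPath("/ns10t/ns1mountpoint/aa.99", 1201, false);
        System.out.println("rm allowed even though quota is exceeded");
        try {
            // create/mkdir grow the namespace: the check still rejects them
            getLocationsForPath("/ns10t/ns1mountpoint/new", 1201, true);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```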
[jira] [Assigned] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
[ https://issues.apache.org/jira/browse/HDFS-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee reassigned HDFS-13325: -- Assignee: Shashikant Banerjee > Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService > --- > > Key: HDFS-13325 > URL: https://issues.apache.org/jira/browse/HDFS-13325 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > > Based on this comment, we can rename {{ObjectStoreRestPlugin}} to > {{OzoneDatanodeService}} so that the plugin name will be consistent with > {{HdslDatanodeService}}. We can also remove {{DataNodeServicePlugin}} and > directly use {{ServicePlugin}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container
[ https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422070#comment-16422070 ] Shashikant Banerjee commented on HDFS-12794: Rebased to the latest repo in patch v12. > Ozone: Parallelize ChunkOutputStream Writes to container > --- > > Key: HDFS-12794 > URL: https://issues.apache.org/jira/browse/HDFS-12794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12794-HDFS-7240.001.patch, > HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, > HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, > HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, > HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch, > HDFS-12794-HDFS-7240.010.patch, HDFS-12794-HDFS-7240.011.patch, > HDFS-12794-HDFS-7240.012.patch > > > The ChunkOutputStream writes are synchronous in nature: once one chunk of data gets > written, the next chunk write is blocked until the previous chunk is written > to the container. > The ChunkOutputStream writes should be made asynchronous, and close() on the > OutputStream should ensure flushing of all dirty buffers to the container. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container
[ https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-12794: --- Attachment: HDFS-12794-HDFS-7240.012.patch > Ozone: Parallelize ChunkOutputStream Writes to container > --- > > Key: HDFS-12794 > URL: https://issues.apache.org/jira/browse/HDFS-12794 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-12794-HDFS-7240.001.patch, > HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, > HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, > HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, > HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch, > HDFS-12794-HDFS-7240.010.patch, HDFS-12794-HDFS-7240.011.patch, > HDFS-12794-HDFS-7240.012.patch > > > The ChunkOutputStream writes are synchronous in nature: once one chunk of data gets > written, the next chunk write is blocked until the previous chunk is written > to the container. > The ChunkOutputStream writes should be made asynchronous, and close() on the > OutputStream should ensure flushing of all dirty buffers to the container. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
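The change HDFS-12794 describes — issuing chunk writes without blocking on the previous chunk, while making close() flush every outstanding buffer — can be sketched in plain Java. This is a minimal, hypothetical illustration using java.util.concurrent, not the actual Ozone ChunkOutputStream code; the class and the in-memory "container" are invented stand-ins.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: asynchronous chunk writes with a flushing close(), as proposed above.
public class AsyncChunkWriter implements AutoCloseable {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final List<CompletableFuture<Void>> pending = new ArrayList<>();
    private final List<byte[]> container = new ArrayList<>(); // stand-in for the real container

    public void writeChunk(byte[] chunk) {
        // Do not block on the previous chunk; queue the write and return immediately.
        pending.add(CompletableFuture.runAsync(() -> {
            synchronized (container) { container.add(chunk); }
        }, pool));
    }

    @Override
    public void close() {
        // close() must flush: wait for every dirty buffer to reach the container.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
    }

    public int chunksWritten() {
        synchronized (container) { return container.size(); }
    }
}
```

After close() returns, all queued chunks are guaranteed to have landed in the container, which is the flushing semantics the issue asks for.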
[jira] [Created] (HDFS-13380) RBF: mv/rm fail after the directory exceeded the quota limit
Weiwei Wu created HDFS-13380: Summary: RBF: mv/rm fail after the directory exceeded the quota limit Key: HDFS-13380 URL: https://issues.apache.org/jira/browse/HDFS-13380 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Weiwei Wu It always fails when I try to mv/rm a directory which has exceeded the quota limit. {code:java} [hadp@hadoop]$ hdfs dfsrouteradmin -ls Mount Table Entries: Source Destinations Owner Group Mode Quota/Usage /ns10t ns10->/ns10t hadp hadp rwxr-xr-x [NsQuota: 1200/1201, SsQuota: -/-] [hadp@hadoop]$ hdfs dfs -rm hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 rm: Failed to move to trash: hdfs://ns-fed/ns10t/ns1mountpoint/aa.99: The NameSpace quota (directories and files) is exceeded: quota=1200 file count=1201 [hadp@hadoop]$ hdfs dfs -rm -skipTrash hdfs://ns-fed/ns10t/ns1mountpoint/aa.99 rm: The NameSpace quota (directories and files) is exceeded: quota=1200 file count=1201 {code} I think we should add a parameter to the method *getLocationsForPath* to determine whether quota verification should be performed for the operation, e.g. for the mv source directory and the rm directory. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
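The fix proposed in HDFS-13380 — an extra flag on getLocationsForPath() so that quota verification can be skipped for operations like rm/mv that do not add namespace entries — can be sketched as follows. This is a simplified, hypothetical stand-in, not the actual Router code; everything except the getLocationsForPath name is invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: path resolution with caller-controlled quota verification.
public class QuotaCheckSketch {
    static class QuotaExceededException extends RuntimeException {
        QuotaExceededException(String m) { super(m); }
    }

    private final Map<String, long[]> quotaUsage = new HashMap<>(); // path -> {quota, used}

    void setQuota(String path, long quota, long used) {
        quotaUsage.put(path, new long[] {quota, used});
    }

    // 'checkQuota' mirrors the extra parameter suggested for getLocationsForPath.
    String getLocationsForPath(String path, boolean checkQuota) {
        long[] q = quotaUsage.get(path);
        if (checkQuota && q != null && q[1] > q[0]) {
            throw new QuotaExceededException(
                "The NameSpace quota (directories and files) is exceeded: quota="
                + q[0] + " file count=" + q[1]);
        }
        return path; // placeholder for the resolved remote location
    }

    String delete(String path) {
        // rm/mv skip quota verification: they never add namespace entries.
        return getLocationsForPath(path, false);
    }

    String create(String path) {
        return getLocationsForPath(path, true);
    }
}
```

With quota 1200 and usage 1201 (the situation in the report above), delete() now resolves the path while create() still fails the quota check.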
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422056#comment-16422056 ] Yiqun Lin commented on HDFS-13364: -- [~elgoiri], it seems you attached an incorrect patch and didn't include the change mentioned. So I did a detailed review based on the v003 patch. Here are my review comments; it almost looks great: *ConnectionPool.java* # One typo: {{Mostly based on NameNodeProxies#createNonHAProxy() but it need..}} should be {{Mostly based on NameNodeProxies#createNonHAProxy() but it needs}}. *TestRouterRpc.java* # Can we update the comment {{Client interface to the Namenode.}} to {{Client interface to the default Namenode.}}? This will look more accurate. # In the test methods {{testProxyVersionRequest}}, {{testProxyGetBlockKeys}} and {{testProxyGetBlocks}}, the values returned by routerNamenodeProtocol should be the actual values in the {{assertEquals}} comparisons. Please attach a clean patch and address these comments as well :). > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch, HDFS-13365.004.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
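The third review comment above concerns JUnit's argument convention: assertEquals(expected, actual) reports failures as "expected:<X> but was:<Y>", so putting the value obtained through the Router proxy in the expected slot produces a misleading failure message. A minimal illustration, using a plain-Java stand-in for JUnit's helper (the version strings and variable names are hypothetical):

```java
// Illustrates argument order in assertEquals-style checks.
public class AssertOrderExample {
    // Stand-in for JUnit's assertEquals(expected, actual).
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            // A swapped call would report the reference value as "but was",
            // hiding which side actually came from the code under test.
            throw new AssertionError("expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        String expectedVersion = "3.2.0"; // reference value, e.g. from the real NamenodeProtocol
        String actualVersion   = "3.2.0"; // value returned through the Router proxy
        // Correct order: known-good reference first, value under test second.
        assertEquals(expectedVersion, actualVersion);
    }
}
```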
[jira] [Created] (HDFS-13379) Failure when running dfs command on a wasb url
Jackal Tsai created HDFS-13379: -- Summary: Failure when running dfs command on a wasb url Key: HDFS-13379 URL: https://issues.apache.org/jira/browse/HDFS-13379 Project: Hadoop HDFS Issue Type: Bug Components: fs Affects Versions: 3.0.1 Environment: Ubuntu 16 Reporter: Jackal Tsai {code:java} hdfs dfs -fs wasb://hdfs@MY_ACCOUNT.blob.core.windows.net/ -ls / -ls: Fatal internal error java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azure.NativeAzureFileSystem not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2559) at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3254) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3286) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:225) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:460) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325) at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:249) at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:232) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) at org.apache.hadoop.fs.shell.Command.run(Command.java:176) at org.apache.hadoop.fs.FsShell.run(FsShell.java:328) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:391) Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azure.NativeAzureFileSystem not found at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2463) at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2557) ... 18 more {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422007#comment-16422007 ] Yiqun Lin commented on HDFS-13365: -- [~elgoiri], the change relevant to {{checkSuperuserPrivilege}} looks good to me. Nit: Can you remove these two comments? {noformat} // Is this by the Router user itself? // Is the user a member of the super group? {noformat} > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, > HDFS-13365.003.patch, HDFS-13365.004.patch > > > We should support HTrace and add spans. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422003#comment-16422003 ] genericqa commented on HDFS-13376: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13376 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917166/HDFS-13376.001.patch | | Optional Tests | asflicense | | uname | Linux e2e51334bdf2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dc8e343 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 303 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23744/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TLS support error in Native Build of hadoop-hdfs-native-client > -- > > Key: HDFS-13376 > URL: https://issues.apache.org/jira/browse/HDFS-13376 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, documentation, native >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13376.001.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message): > [exec] FATAL ERROR: The required feature thread_local storage is not > supported by > [exec] your compiler. Known compilers that support this feature: GCC, > Visual > [exec] Studio, Clang (community version), Clang (version for iOS 9 and > later). > [exec] > [exec] > [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed > [exec] -- Configuring incomplete, errors occurred! 
> {noformat} > My environment: > Linux: Red Hat 4.4.7-3 > cmake: 3.8.2 > java: 1.8.0_131 > gcc: 4.4.7 > maven: 3.5.0 > Seems this is because the low version of gcc, will report after confirming > it. > Maybe the {{BUILDING.txt}} needs update to explain the supported lowest gcc > version. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] LiXin Ge updated HDFS-13376: Status: Patch Available (was: Open) > TLS support error in Native Build of hadoop-hdfs-native-client > -- > > Key: HDFS-13376 > URL: https://issues.apache.org/jira/browse/HDFS-13376 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, documentation, native >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13376.001.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message): > [exec] FATAL ERROR: The required feature thread_local storage is not > supported by > [exec] your compiler. Known compilers that support this feature: GCC, > Visual > [exec] Studio, Clang (community version), Clang (version for iOS 9 and > later). > [exec] > [exec] > [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed > [exec] -- Configuring incomplete, errors occurred! > {noformat} > My environment: > Linux: Red Hat 4.4.7-3 > cmake: 3.8.2 > java: 1.8.0_131 > gcc: 4.4.7 > maven: 3.5.0 > Seems this is because the low version of gcc, will report after confirming > it. > Maybe the {{BUILDING.txt}} needs update to explain the supported lowest gcc > version. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] LiXin Ge updated HDFS-13376: Attachment: HDFS-13376.001.patch > TLS support error in Native Build of hadoop-hdfs-native-client > -- > > Key: HDFS-13376 > URL: https://issues.apache.org/jira/browse/HDFS-13376 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, documentation, native >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13376.001.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message): > [exec] FATAL ERROR: The required feature thread_local storage is not > supported by > [exec] your compiler. Known compilers that support this feature: GCC, > Visual > [exec] Studio, Clang (community version), Clang (version for iOS 9 and > later). > [exec] > [exec] > [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed > [exec] -- Configuring incomplete, errors occurred! > {noformat} > My environment: > Linux: Red Hat 4.4.7-3 > cmake: 3.8.2 > java: 1.8.0_131 > gcc: 4.4.7 > maven: 3.5.0 > Seems this is because the low version of gcc, will report after confirming > it. > Maybe the {{BUILDING.txt}} needs update to explain the supported lowest gcc > version. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13376) TLS support error in Native Build of hadoop-hdfs-native-client
[ https://issues.apache.org/jira/browse/HDFS-13376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421988#comment-16421988 ] LiXin Ge commented on HDFS-13376: - [~James C] Thanks for your quick response and test code. This is indeed a GCC version issue, as we guessed: hadoop-hdfs-native-client builds successfully after I upgraded my GCC to version 4.8.5 built from source. As described on the [GNU website|https://gcc.gnu.org/projects/cxx-status.html#cxx11], GCC 4.8.1 was the first feature-complete implementation of the 2011 C++ standard. I have attached a patch that updates {{BUILDING.txt}} to explain this. [~James C], could you please help review it? Thanks! > TLS support error in Native Build of hadoop-hdfs-native-client > -- > > Key: HDFS-13376 > URL: https://issues.apache.org/jira/browse/HDFS-13376 > Project: Hadoop HDFS > Issue Type: Bug > Components: build, documentation, native >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13376.001.patch > > > mvn --projects hadoop-hdfs-project/hadoop-hdfs-native-client clean package > -Pdist,native -DskipTests -Dtar > {noformat} > [exec] CMake Error at main/native/libhdfspp/CMakeLists.txt:64 (message): > [exec] FATAL ERROR: The required feature thread_local storage is not > supported by > [exec] your compiler. Known compilers that support this feature: GCC, > Visual > [exec] Studio, Clang (community version), Clang (version for iOS 9 and > later). > [exec] > [exec] > [exec] -- Performing Test THREAD_LOCAL_SUPPORTED - Failed > [exec] -- Configuring incomplete, errors occurred! > {noformat} > My environment: > Linux: Red Hat 4.4.7-3 > cmake: 3.8.2 > java: 1.8.0_131 > gcc: 4.4.7 > maven: 3.5.0 > Seems this is because the low version of gcc, will report after confirming > it. > Maybe the {{BUILDING.txt}} needs update to explain the supported lowest gcc > version. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12950) [oiv] ls will fail in secure cluster
[ https://issues.apache.org/jira/browse/HDFS-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421978#comment-16421978 ] Brahma Reddy Battula commented on HDFS-12950: - OK, you can take it. > [oiv] ls will fail in secure cluster > - > > Key: HDFS-12950 > URL: https://issues.apache.org/jira/browse/HDFS-12950 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > > If we execute ls, it will throw the following: > {noformat} > hdfs dfs -ls webhdfs://127.0.0.1:5978/ > ls: Invalid value for webhdfs parameter "op" > {noformat} > When the client is configured with security (i.e. "hadoop.security.authentication= > KERBEROS"), > webhdfs will request a delegation token, which is not implemented, and > hence it will throw “ls: Invalid value for webhdfs parameter "op"”. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org