[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]
[ https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301581#comment-15301581 ] Zhe Zhang commented on HDFS-10236: -- Thanks Rakesh for the work. Patch LGTM overall. A few issues: # I think {{addExpectedReplicasToPending}} means adding the actual expected replicas (instead of *number of* expected replicas) to {{pendingReconstruction}}. So I don't think we should change this name at this stage (as we discussed, *replica* is difficult to rename and we should leave it until later). # Similarly, the comment "// do not schedule more if enough redundancy is already pending" doesn't read so well IMO. Maybe keeping it at this stage is better. # {{int curExpectedReplicas = blockManager.getExpectedRedundancyNum(block);}} is a little inconsistent. Maybe rename the variable to {{curExpectedRedundancy}}? +1 after addressing the above. > Erasure Coding: Rename replication-based names in BlockManager to more > generic [part-3] > --- > > Key: HDFS-10236 > URL: https://issues.apache.org/jira/browse/HDFS-10236 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch > > > The idea of this jira is to rename the following entity in BlockManager as, > {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]
[ https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301581#comment-15301581 ] Zhe Zhang edited comment on HDFS-10236 at 5/26/16 5:56 AM: --- Thanks Rakesh for the work. Patch LGTM overall. A few issues: # I think {{addExpectedReplicasToPending}} means adding the actual expected replicas (instead of *number of* expected replicas) to {{pendingReconstruction}}. So I don't think we should change this name at this stage (as we discussed, *replica* is difficult to rename and we should leave it until later). # Similarly, the comment "// do not schedule more if enough redundancy is already pending" doesn't read so well IMO. Maybe keeping it at this stage is better. # {{int curExpectedReplicas = blockManager.getExpectedRedundancyNum(block);}} is a little inconsistent. Maybe rename the variable to {{curExpectedRedundancy}}? +1 after addressing the above. was (Author: zhz): Thanks Rakesh for the work. Patch LGTM overall. A few issues: # I think {{addExpectedReplicasToPending}} means adding the actual expected replicas (instead of *number of* expected replicas) to {{pendingReconstruction}}. So I don't think we should change this name at this stage (as we discussed, *replica* is difficult to rename and we should leave it until later). # Similarly, the comment "// do not schedule more if enough redundancy is already pending" doesn't read so well IMO. Maybe keeping it at this stage is better. # {{int curExpectedReplicas = blockManager.getExpectedRedundancyNum(block);}} is a little inconsistent. Maybe rename the variable to {{curExpectedRedundancy}}? +1 after appending.
> Erasure Coding: Rename replication-based names in BlockManager to more > generic [part-3] > --- > > Key: HDFS-10236 > URL: https://issues.apache.org/jira/browse/HDFS-10236 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch > > > The idea of this jira is to rename the following entity in BlockManager as, > {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}
[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster
[ https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10458: - Component/s: namenode encryption > getFileEncryptionInfo should return quickly for non-encrypted cluster > - > > Key: HDFS-10458 > URL: https://issues.apache.org/jira/browse/HDFS-10458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, namenode >Affects Versions: 2.6.0 >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-10458.00.patch > > > {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks > if the path belongs to an EZ. For a busy system with potentially many listing > operations, this could cause locking contention. > I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to > return whether the system has any EZ. If there is no EZ at all, > {{getFileEncryptionInfo}} should return null without {{readLock}}. > If {{hasEncryptionZone}} is only used in the above scenario, maybe it > doesn't need a {{readLock}} itself -- if the system doesn't have any EZ when > {{getFileEncryptionInfo}} is called on a path, it means the path cannot be > encrypted.
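The optimization proposed above boils down to "check a lock-free flag before taking the read lock". A minimal, self-contained Java sketch of that pattern, with hypothetical names standing in for the real FSDirectory / EncryptionZoneManager code:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch, not HDFS code: if no encryption zone exists at
// call time, skip acquiring the read lock entirely.
class EncryptionInfoLookup {
    static final AtomicInteger readLockAcquisitions = new AtomicInteger();

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    // Updated under the write lock when zones are created or removed.
    private volatile int numEncryptionZones = 0;

    // Lock-free check: if the count is zero, no path can be in an EZ.
    boolean hasEncryptionZone() {
        return numEncryptionZones > 0;
    }

    String getFileEncryptionInfo(String path) {
        if (!hasEncryptionZone()) {
            return null;                     // fast path: no readLock contention
        }
        lock.readLock().lock();
        readLockAcquisitions.incrementAndGet();
        try {
            return resolveUnderLock(path);
        } finally {
            lock.readLock().unlock();
        }
    }

    private String resolveUnderLock(String path) {
        return null;                         // placeholder for real EZ resolution
    }

    public static void main(String[] args) {
        EncryptionInfoLookup fsd = new EncryptionInfoLookup();
        System.out.println(fsd.getFileEncryptionInfo("/user/data"));
        System.out.println(readLockAcquisitions.get());
    }
}
```

The reasoning in the last paragraph of the description is what makes the unlocked read of the counter safe: if the count reads zero, the queried path cannot already be encrypted, so a stale read can only miss zones created concurrently, which the original locked version could miss too.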
[jira] [Commented] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301563#comment-15301563 ] Hadoop QA commented on HDFS-10466: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client: patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 29s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806291/HDFS-10466.001.patch | | JIRA Issue | HDFS-10466 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d53ec1fc3267 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4f513a4 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15574/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15574/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15574/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL:
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301560#comment-15301560 ] Colin Patrick McCabe commented on HDFS-7240: bq. [~szetszwo] wrote: I seem to recall that you got your committership by contributing the symlink feature, however, the symlink feature is still not working as of today. Why don't you fix it? I think you want to build up a good track record for yourself. [~andrew.wang] did not get his committership by contributing the symlink feature. By the time he was elected as a committer, he had contributed a system for efficiently storing and reporting high-percentile metrics, an API to expose disk location information to advanced HDFS clients, converted all remaining JUnit 3 HDFS tests to JUnit 4, and added symlink support to FileSystem. The last one was just contributing a new API to the FileSystem class, not implementing the symlink feature itself. You are probably thinking of [~eli], who became a committer partly by working on HDFS symlinks. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Updated] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Yu updated HDFS-10466: --- Attachment: HDFS-10466.001.patch > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL: https://issues.apache.org/jira/browse/HDFS-10466 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Juan Yu >Assignee: Juan Yu >Priority: Minor > Attachments: HDFS-10466.001.patch, HDFS-10466.patch > > > https://issues.apache.org/jira/browse/HDFS-202 added a new API > listLocatedStatus() to get all files' status with block locations for a > directory. This is great: we no longer need to call > FileSystem.getFileBlockLocations() for each file, and it's much faster (about > 8-10 times). > However, the returned LocatedFileStatus only contains the basic BlockLocation > instead of HdfsBlockLocation; the LocatedBlock details are stripped out. > It should do the same as DFSClient.getBlockLocations() and return > HdfsBlockLocation, which provides full block location details. > The implementation of DistributedFileSystem.listLocatedStatus() retrieves > HdfsLocatedFileStatus, which contains all the information, but when converting > it to LocatedFileStatus, it doesn't keep the LocatedBlock data. It's a simple > (and compatible) change to make to keep the LocatedBlock details.
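The compatibility argument in the description rests on HdfsBlockLocation being a subclass of BlockLocation: the method can keep its declared return type while the array elements carry the extra detail. A simplified, hypothetical Java illustration of that shape (stand-in fields, not the real hadoop-hdfs-client types):

```java
// Simplified stand-ins for the real types: in hadoop-hdfs-client,
// HdfsBlockLocation extends BlockLocation and wraps a LocatedBlock.
class BlockLocation {
    final String[] hosts;
    BlockLocation(String[] hosts) { this.hosts = hosts; }
}

class HdfsBlockLocation extends BlockLocation {
    final String locatedBlock;           // stands in for the LocatedBlock details
    HdfsBlockLocation(String[] hosts, String locatedBlock) {
        super(hosts);
        this.locatedBlock = locatedBlock;
    }
}

class LocatedStatusListing {
    // The declared return type stays BlockLocation[] (API-compatible),
    // but the elements are the richer subtype, so callers needing block
    // details can downcast instead of issuing an extra RPC per file.
    static BlockLocation[] listLocations() {
        return new BlockLocation[] {
            new HdfsBlockLocation(new String[] {"datanode1"}, "blk_1001")
        };
    }

    public static void main(String[] args) {
        for (BlockLocation loc : LocatedStatusListing.listLocations()) {
            if (loc instanceof HdfsBlockLocation) {
                System.out.println(((HdfsBlockLocation) loc).locatedBlock);
            }
        }
    }
}
```

Because existing callers only see the BlockLocation supertype, populating the array with the subtype changes nothing for them, which is why the description can call the change compatible.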
[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301465#comment-15301465 ] Aaron T. Myers commented on HDFS-10463: --- [~templedf] - trunk patch looks pretty good to me, but it seems like the branch-2 patch had this very test fail. I'll be +1 once that's addressed. > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shut down in > all cases.
[jira] [Commented] (HDFS-10433) Make retry also works well for Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301452#comment-15301452 ] Hadoop QA commented on HDFS-10433: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 1m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s {color} | {color:red} root: patch generated 26 new + 224 unchanged - 6 fixed = 250 total (was 230) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 7s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 2s {color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 111m 19s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806273/h10433_20160525b.patch | | JIRA Issue | HDFS-10433 | | Optional Tests | asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 595da50cdba9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 013532a | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15572/artifact/patchprocess/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15572/testReport/ | | modules | C:
[jira] [Commented] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301435#comment-15301435 ] Hadoop QA commented on HDFS-10431: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 new + 1 unchanged - 4 fixed = 1 total (was 5) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 0s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 25s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806279/HDFS-10431-HDFS-9924.001.patch | | JIRA Issue | HDFS-10431 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 32e50e4b5f73 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 013532a | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15573/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15573/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15573/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15573/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 >
[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301432#comment-15301432 ] Rakesh R commented on HDFS-10434: - Thanks [~drankye] for the final reviews and committing the patch. > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code}
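The failure mode quoted above (counter still 0 when the assertion runs) is the classic symptom of asserting on an asynchronously updated metric exactly once. The usual fix is to poll the counter until it reaches the expected value or a timeout expires; Hadoop tests typically use GenericTestUtils.waitFor for this. A hedged, dependency-free sketch of that wait loop (the helper below is hypothetical, not the patch's actual code):

```java
import java.util.function.LongSupplier;

// Hypothetical polling helper: re-check an asynchronously updated
// counter until it matches the expectation, instead of asserting once.
class MetricWait {
    static boolean waitForCounter(LongSupplier metric, long expected,
                                  long checkEveryMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (metric.getAsLong() != expected) {
            if (System.currentTimeMillis() >= deadline) {
                return false;            // caller fails the test with a message
            }
            Thread.sleep(checkEveryMs);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated metric that flips from 0 to 1 after ~100 ms, like
        // EcReconstructionTasks after a reconstruction task completes.
        LongSupplier ecReconstructionTasks =
            () -> System.currentTimeMillis() - start >= 100 ? 1 : 0;
        System.out.println(waitForCounter(ecReconstructionTasks, 1, 10, 5000));
    }
}
```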
[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301330#comment-15301330 ] Hadoop QA commented on HDFS-10463: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 31s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 45s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s {color} | {color:green} branch-2 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s {color} | {color:red} root: patch generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 24s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 14s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 32s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 57s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s {color} | {color:green} Patch does not generate ASF License warnings. {color} | |
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301324#comment-15301324 ] Bikas Saha commented on HDFS-7240: -- In case there is a conference call, please send an email to hdfs-dev with the proposed meeting details for wider dispersal and participation since that is the right forum to organize community activities. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-10431: --- Priority: Minor (was: Major) Component/s: (was: hdfs) test > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou >Priority: Minor > Attachments: HDFS-10431-HDFS-9924.000.patch, > HDFS-10431-HDFS-9924.001.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup.
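The speedup argument in the description — one shared, lazily started cluster instead of a fresh MiniDFSCluster plus AsyncDistributedFileSystem per test — can be sketched generically. The classes below are illustrative stand-ins for the expensive fixtures, not Hadoop's actual API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Generic sketch of the proposed refactoring: an expensive fixture
// (standing in for MiniDFSCluster + AsyncDistributedFileSystem) is
// built once and shared by all tests instead of per test.
// ExpensiveCluster is a hypothetical stand-in, not Hadoop's class.
public class SharedFixture {
    public static final AtomicInteger bootCount = new AtomicInteger();

    public static class ExpensiveCluster {
        public ExpensiveCluster() { bootCount.incrementAndGet(); }  // costly startup
        public String rename(String src, String dst) { return src + "->" + dst; }
    }

    private static ExpensiveCluster shared;

    // Equivalent of a JUnit @BeforeClass fixture: initialize once, reuse everywhere.
    public static synchronized ExpensiveCluster cluster() {
        if (shared == null) {
            shared = new ExpensiveCluster();
        }
        return shared;
    }

    public static void main(String[] args) {
        cluster().rename("/a", "/b");   // "test 1"
        cluster().rename("/c", "/d");   // "test 2"
        System.out.println("boots=" + bootCount.get());  // cluster started only once
    }
}
```

The trade-off is the one the description hints at: tests that need a cluster restart in the middle cannot use the shared instance and must manage their own.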
[jira] [Updated] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10466: --- Target Version/s: 2.9.0 > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL: https://issues.apache.org/jira/browse/HDFS-10466 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Juan Yu >Assignee: Juan Yu >Priority: Minor > Attachments: HDFS-10466.patch > > > https://issues.apache.org/jira/browse/HDFS-202 added a new API > listLocatedStatus() to get all files' status with block locations for a > directory. This is great that we don't need to call > FileSystem.getFileBlockLocations() for each file. it's much faster (about > 8-10 times). > However, the returned LocatedFileStatus only contains basic BlockLocation > instead of HdfsBlockLocation, the LocatedBlock details are stripped out. > It should do the similar as DFSClient.getBlockLocations(), return > HdfsBlockLocation which provide full block location details. > The implementation of DistributedFileSystem. listLocatedStatus() retrieves > HdfsLocatedFileStatus which contains all information, but when convert it to > LocatedFileStatus, it doesn't keep LocatedBlock data. It's a simple (and > compatible) change to make to keep the LocatedBlock details.
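The compatibility claim in the description — return the richer HdfsBlockLocation while keeping the declared BlockLocation return type — rests on plain subtyping: old callers see the base type unchanged, new callers downcast for the extra detail. A simplified sketch with stand-in classes (these are not the real Hadoop types, which carry hosts, offsets, and full LocatedBlock data):

```java
// Stand-in classes illustrating the compatible-change pattern: the API keeps
// returning the base type, but the instances are actually the richer subtype.
// BlockLocation/HdfsBlockLocation here are simplified stand-ins, not Hadoop's.
class BlockLocation {
    final String[] hosts;
    BlockLocation(String[] hosts) { this.hosts = hosts; }
}

class HdfsBlockLocation extends BlockLocation {
    final long blockId;  // extra detail that was being stripped out
    HdfsBlockLocation(String[] hosts, long blockId) {
        super(hosts);
        this.blockId = blockId;
    }
}

public class ListLocatedSketch {
    // Declared return type stays BlockLocation: source- and binary-compatible.
    static BlockLocation locate() {
        return new HdfsBlockLocation(new String[]{"dn1"}, 1073741825L);
    }

    public static void main(String[] args) {
        BlockLocation loc = locate();
        if (loc instanceof HdfsBlockLocation) {          // new callers downcast
            System.out.println(((HdfsBlockLocation) loc).blockId);
        }
    }
}
```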
[jira] [Commented] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301286#comment-15301286 ] Xiaobing Zhou commented on HDFS-10431: -- v001 fixed check style issues. > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch, > HDFS-10431-HDFS-9924.001.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup.
[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10431: - Attachment: HDFS-10431-HDFS-9924.001.patch > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch, > HDFS-10431-HDFS-9924.001.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup.
[jira] [Updated] (HDFS-10433) Make retry also works well for Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-10433: --- Attachment: h10433_20160525b.patch h10433_20160525b.patch: fixes more warnings. Note that the indentation checkstyle warnings are bogus. > Make retry also works well for Async DFS > > > Key: HDFS-10433 > URL: https://issues.apache.org/jira/browse/HDFS-10433 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Tsz Wo Nicholas Sze > Attachments: h10433_20160524.patch, h10433_20160525.patch, > h10433_20160525b.patch > > > In current Async DFS implementation, file system calls are invoked and > returns Future immediately to clients. Clients call Future#get to retrieve > final results. Future#get internally invokes a chain of callbacks residing in > ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. The > callback path bypasses the original retry layer/logic designed for > synchronous DFS. This proposes refactoring to make retry also works for Async > DFS.
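The retry-bypass problem described in the issue arises because retry logic wrapped around a blocking invocation never sees failures that only surface later, through the Future's completion path. One generic way to make retry work for asynchronous calls is to re-issue the call from the completion callback and chain the result into a single returned future. The sketch below uses JDK CompletableFuture with hypothetical names; it illustrates the shape of the fix, not Hadoop's actual implementation:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Hypothetical sketch: instead of retrying around a blocking call, each
// failure re-issues the async call and feeds the outcome into the same
// caller-visible future, so retries are transparent to Future consumers.
public class AsyncRetry {
    public static <T> CompletableFuture<T> asyncCallWithRetry(
            Supplier<CompletableFuture<T>> call, int maxAttempts) {
        CompletableFuture<T> result = new CompletableFuture<>();
        attempt(call, maxAttempts, result);
        return result;
    }

    private static <T> void attempt(Supplier<CompletableFuture<T>> call,
                                    int attemptsLeft, CompletableFuture<T> result) {
        call.get().whenComplete((value, err) -> {
            if (err == null) {
                result.complete(value);
            } else if (attemptsLeft > 1) {
                attempt(call, attemptsLeft - 1, result);  // retry on failure
            } else {
                result.completeExceptionally(err);        // retries exhausted
            }
        });
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // A call that fails twice, then succeeds on the third attempt.
        CompletableFuture<String> f = asyncCallWithRetry(() -> {
            calls[0]++;
            return calls[0] < 3
                ? CompletableFuture.failedFuture(new RuntimeException("transient"))
                : CompletableFuture.completedFuture("ok");
        }, 5);
        System.out.println(f.get() + " after " + calls[0] + " attempts");
    }
}
```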
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301254#comment-15301254 ] Anu Engineer commented on HDFS-7240: Thank you, I have updated the JIRA and assigned this back to Jitendra. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Updated] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-7240: --- Attachment: Ozonedesignupdate.pdf Hi All, I have attached the ozone design update. Hopefully this addresses the concerns expressed by [~andrew.wang]. My apologies for the delay. I am also hoping that this will take us back to ozone's technical issues, and I would like to host a call if anyone would like to discuss this in greater depth. [~andrew.wang] [~zhz] [~cmccabe] [~drankye] I would like to respond to the technical issues you have raised in this JIRA once you get time to read through this design update and we all have a shared understanding of the current state of ozone. I would like to reassure you all that this is a design proposal and very much open to change. I would love to discuss the merits of this proposal and would love to see more community engagement and participation in ozone. Please do let me know if I can do anything more to address that concern. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Anu Engineer > Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Updated] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-7240: --- Assignee: Jitendra Nath Pandey (was: Anu Engineer) > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, > ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Commented] (HDFS-10433) Make retry also works well for Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301244#comment-15301244 ] Hadoop QA commented on HDFS-10433: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
| 0 | mvndep | 0m 37s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 42s | trunk passed |
| +1 | compile | 6m 33s | trunk passed |
| +1 | checkstyle | 1m 26s | trunk passed |
| +1 | mvnsite | 2m 42s | trunk passed |
| +1 | mvneclipse | 0m 58s | trunk passed |
| +1 | findbugs | 4m 19s | trunk passed |
| +1 | javadoc | 2m 28s | trunk passed |
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 58s | the patch passed |
| +1 | compile | 7m 18s | the patch passed |
| +1 | javac | 7m 18s | the patch passed |
| -1 | checkstyle | 1m 30s | root: patch generated 30 new + 225 unchanged - 6 fixed = 255 total (was 231) |
| +1 | mvnsite | 2m 39s | the patch passed |
| +1 | mvneclipse | 0m 34s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| -1 | findbugs | 1m 40s | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 2m 27s | the patch passed |
| +1 | unit | 8m 52s | hadoop-common in the patch passed. |
| +1 | unit | 0m 58s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 66m 16s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 26s | Patch does not generate ASF License warnings. |
| | | 125m 44s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Wait not in loop in org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(Object, long, TimeUnit) At AsyncGet.java:[line 56] |
| Failed junit tests | hadoop.hdfs.TestAsyncDFSRename |
| | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806249/h10433_20160525.patch |
| JIRA Issue | HDFS-10433 |
| Optional Tests | asflicense xml compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 616d6bd3c87a 3.13.0-36-lowlatency #63-Ubuntu SMP
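The new FindBugs warning in this report ("Wait not in loop" in AsyncGet$Util.wait) refers to the standard rule that Object.wait must be called inside a loop that re-checks its guard condition, because waits can wake spuriously or before the condition actually holds. A minimal, self-contained illustration of the compliant pattern — not the actual AsyncGet code:

```java
// Minimal illustration of the wait-in-loop idiom FindBugs checks for:
// the guard condition is re-tested after every wakeup.
public class GuardedWait {
    private final Object lock = new Object();
    private boolean done = false;

    public boolean isDone() {
        synchronized (lock) { return done; }
    }

    // Correct pattern: wait inside a while loop, so spurious wakeups
    // simply re-check the condition instead of returning prematurely.
    public void awaitDone() {
        synchronized (lock) {
            while (!done) {
                try {
                    lock.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // preserve interrupt status
                    return;
                }
            }
        }
    }

    public void markDone() {
        synchronized (lock) {
            done = true;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        GuardedWait g = new GuardedWait();
        Thread t = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            g.markDone();
        });
        t.start();
        g.awaitDone();   // returns once markDone() has run
        t.join();
        System.out.println("done=" + g.isDone());
    }
}
```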
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301243#comment-15301243 ] Jing Zhao commented on HDFS-7240: - Looks like contributors do not have permission to attach files anymore? I am assigning the jira to [~anu] so that he can upload the updated design doc. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Anu Engineer > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Updated] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-7240: Assignee: Anu Engineer (was: Jitendra Nath Pandey) > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Anu Engineer > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Commented] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301227#comment-15301227 ] Hadoop QA commented on HDFS-10431: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 7m 34s | trunk passed |
| +1 | compile | 0m 49s | trunk passed |
| +1 | checkstyle | 0m 30s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | mvneclipse | 0m 12s | trunk passed |
| +1 | findbugs | 1m 49s | trunk passed |
| +1 | javadoc | 1m 10s | trunk passed |
| +1 | mvninstall | 0m 55s | the patch passed |
| +1 | compile | 0m 45s | the patch passed |
| +1 | javac | 0m 45s | the patch passed |
| -1 | checkstyle | 0m 24s | hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 1 unchanged - 4 fixed = 4 total (was 5) |
| +1 | mvnsite | 0m 56s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 1m 57s | the patch passed |
| +1 | javadoc | 1m 5s | the patch passed |
| +1 | unit | 62m 29s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 23s | Patch does not generate ASF License warnings. |
| | | 83m 34s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806257/HDFS-10431-HDFS-9924.000.patch |
| JIRA Issue | HDFS-10431 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux b0f249741855 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3c83cee |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15570/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15570/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15570/console |
| Powered by | Apache Yetus 0.2.0 http://yetus.apache.org |

This message was automatically generated.

> Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch > > > 1. Move irrelevant parts out of
[jira] [Commented] (HDFS-8057) Move BlockReader implementation to the client implementation package
[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301206#comment-15301206 ] Takanobu Asanuma commented on HDFS-8057: Thank you for reviewing and committing, Nicholas! > Move BlockReader implementation to the client implementation package > > > Key: HDFS-8057 > URL: https://issues.apache.org/jira/browse/HDFS-8057 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Tsz Wo Nicholas Sze >Assignee: Takanobu Asanuma > Fix For: 2.8.0 > > Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, > HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, > HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.003.patch, > HDFS-8057.branch-2.5.patch > > > BlockReaderLocal, RemoteBlockReader, etc should be moved to > org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader > to BlockReaderRemote.
[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301191#comment-15301191 ] Hadoop QA commented on HDFS-10463: -- (x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 45s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 48s | trunk passed |
| +1 | compile | 6m 43s | trunk passed |
| +1 | checkstyle | 1m 22s | trunk passed |
| +1 | mvnsite | 1m 51s | trunk passed |
| +1 | mvneclipse | 0m 25s | trunk passed |
| +1 | findbugs | 3m 13s | trunk passed |
| +1 | javadoc | 2m 6s | trunk passed |
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 30s | the patch passed |
| +1 | compile | 6m 38s | the patch passed |
| +1 | javac | 6m 38s | the patch passed |
| -1 | checkstyle | 1m 22s | root: patch generated 2 new + 11 unchanged - 0 fixed = 13 total (was 11) |
| +1 | mvnsite | 1m 43s | the patch passed |
| +1 | mvneclipse | 0m 25s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 3m 24s | the patch passed |
| +1 | javadoc | 2m 3s | the patch passed |
| +1 | unit | 7m 40s | hadoop-common in the patch passed. |
| -1 | unit | 61m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 25s | Patch does not generate ASF License warnings. |
| | | 111m 20s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| | hadoop.hdfs.TestDFSUpgradeFromImage |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806240/HDFS-10463.001.patch |
| JIRA Issue | HDFS-10463 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7ec3d98ac29f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3c83cee |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15567/artifact/patchprocess/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15567/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit test logs |
[jira] [Commented] (HDFS-9547) DiskBalancer : Add user documentation
[ https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301187#comment-15301187 ] Lei (Eddy) Xu commented on HDFS-9547: - Hi, Anu, This document looks good to me. +1 > DiskBalancer : Add user documentation > - > > Key: HDFS-9547 > URL: https://issues.apache.org/jira/browse/HDFS-9547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode >Reporter: Anu Engineer >Assignee: Anu Engineer > Attachments: HDFS-9547-HDFS-1312.001.patch, > HDFS-9547-HDFS-1312.002.patch > > > Write diskbalancer.md since this is a new tool and explain the usage with > examples.
[jira] [Commented] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301176#comment-15301176 ] Hadoop QA commented on HDFS-10466: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} | {color:red} HDFS-10466 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806262/HDFS-10466.patch | | JIRA Issue | HDFS-10466 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15571/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL: https://issues.apache.org/jira/browse/HDFS-10466 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Juan Yu >Assignee: Juan Yu >Priority: Minor > Attachments: HDFS-10466.patch > > > https://issues.apache.org/jira/browse/HDFS-202 added a new API > listLocatedStatus() to get all files' status with block locations for a > directory. This is great that we don't need to call > FileSystem.getFileBlockLocations() for each file. it's much faster (about > 8-10 times). > However, the returned LocatedFileStatus only contains basic BlockLocation > instead of HdfsBlockLocation, the LocatedBlock details are stripped out. > It should do the similar as DFSClient.getBlockLocations(), return > HdfsBlockLocation which provide full block location details. > The implementation of DistributedFileSystem. 
listLocatedStatus() retrieves > HdfsLocatedFileStatus which contains all information, but when convert it to > LocatedFileStatus, it doesn't keep LocatedBlock data. It's a simple (and > compatible) change to make to keep the LocatedBlock details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
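The conversion bug described in HDFS-10466 boils down to a converter that rebuilds only the base-class fields, so the subclass detail is silently dropped. A minimal sketch with stand-in classes (these are not the real Hadoop types; {{lossyConvert}} and {{keepingConvert}} are hypothetical names) shows why returning the subclass instance is the compatible fix:

```java
public class SubclassLossSketch {
    // Stand-in for org.apache.hadoop.fs.BlockLocation: only basic fields.
    static class BlockLocation {
        final String[] hosts;
        BlockLocation(String[] hosts) { this.hosts = hosts; }
    }

    // Stand-in for HdfsBlockLocation: carries the extra LocatedBlock-style detail.
    static class HdfsBlockLocation extends BlockLocation {
        final String locatedBlockId;
        HdfsBlockLocation(String[] hosts, String id) {
            super(hosts);
            this.locatedBlockId = id;
        }
    }

    // Lossy conversion: rebuilds a plain BlockLocation from base fields,
    // which is the kind of copy the current code path performs.
    static BlockLocation lossyConvert(HdfsBlockLocation src) {
        return new BlockLocation(src.hosts);
    }

    // Compatible fix: keep the subclass instance. Callers that only know
    // the base type are unaffected; others can downcast for full detail.
    static BlockLocation keepingConvert(HdfsBlockLocation src) {
        return src;
    }

    public static void main(String[] args) {
        HdfsBlockLocation full = new HdfsBlockLocation(new String[]{"dn1"}, "blk_1");
        System.out.println(lossyConvert(full) instanceof HdfsBlockLocation);   // false
        System.out.println(keepingConvert(full) instanceof HdfsBlockLocation); // true
    }
}
```

Callers that only use the base type keep working either way; only the second conversion lets callers recover the extra block detail, which is why the change is both simple and compatible.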
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301172#comment-15301172 ] Tsz Wo Nicholas Sze commented on HDFS-7240: --- [~andrew.wang], I understand you like to contribute to this issue. However, why don't you fix HDFS symlink first? It is also a very useful and important feature. It is one of the most wanted feature. Many people are asking for it. I seem to recall that you got your committership by contributing the symlink feature, however, the symlink feature is still not working as of today. Why don't you fix it? I think you want to build up a good track record for yourself. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Yu updated HDFS-10466: --- Status: Patch Available (was: Open) Ran all related unit tests and they passed. Also verified my application can get HdfsBlockLocation successfully. > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL: https://issues.apache.org/jira/browse/HDFS-10466 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Juan Yu >Assignee: Juan Yu >Priority: Minor > Attachments: HDFS-10466.patch > > > https://issues.apache.org/jira/browse/HDFS-202 added a new API > listLocatedStatus() to get all files' status with block locations for a > directory. This is great that we don't need to call > FileSystem.getFileBlockLocations() for each file. it's much faster (about > 8-10 times). > However, the returned LocatedFileStatus only contains basic BlockLocation > instead of HdfsBlockLocation, the LocatedBlock details are stripped out. > It should do the similar as DFSClient.getBlockLocations(), return > HdfsBlockLocation which provide full block location details. > The implementation of DistributedFileSystem. listLocatedStatus() retrieves > HdfsLocatedFileStatus which contains all information, but when convert it to > LocatedFileStatus, it doesn't keep LocatedBlock data. It's a simple (and > compatible) change to make to keep the LocatedBlock details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
[ https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Yu updated HDFS-10466: --- Attachment: HDFS-10466.patch > DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation > instead of BlockLocation > -- > > Key: HDFS-10466 > URL: https://issues.apache.org/jira/browse/HDFS-10466 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Juan Yu >Assignee: Juan Yu >Priority: Minor > Attachments: HDFS-10466.patch > > > https://issues.apache.org/jira/browse/HDFS-202 added a new API > listLocatedStatus() to get all files' status with block locations for a > directory. This is great that we don't need to call > FileSystem.getFileBlockLocations() for each file. it's much faster (about > 8-10 times). > However, the returned LocatedFileStatus only contains basic BlockLocation > instead of HdfsBlockLocation, the LocatedBlock details are stripped out. > It should do the similar as DFSClient.getBlockLocations(), return > HdfsBlockLocation which provide full block location details. > The implementation of DistributedFileSystem. listLocatedStatus() retrieves > HdfsLocatedFileStatus which contains all information, but when convert it to > LocatedFileStatus, it doesn't keep LocatedBlock data. It's a simple (and > compatible) change to make to keep the LocatedBlock details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301142#comment-15301142 ] Tsz Wo Nicholas Sze commented on HDFS-9924: --- > ... I'm really hesitant for a feature that makes it trivial to destroy a NN. I understand you concern but it is a different problem. We should not protect NN by making the client slow. We should add protection in NN instead. For example, we recently implemented RPC scheduler/callqueue backoff using response times (HADOOP-12916). > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: AsyncHdfs20160510.pdf > > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
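The API shape proposed in the umbrella JIRA (issue many independent calls without blocking, then collect results later via the usual Future.get()) can be sketched with plain java.util.concurrent primitives. This is illustrative only; {{renameAsync}} is a hypothetical stand-in, not the actual AsyncDistributedFileSystem method:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncCallSketch {
    // Stand-in for an asynchronous call: submit the work and return a
    // Future immediately instead of blocking until the "RPC" completes.
    static Future<Boolean> renameAsync(ExecutorService pool, String src, String dst) {
        return pool.submit(() -> !src.equals(dst));
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Boolean>> pending = new ArrayList<>();
        // Issue many independent calls from one thread, none of them blocking.
        for (int i = 0; i < 8; i++) {
            pending.add(renameAsync(pool, "/src" + i, "/dst" + i));
        }
        // Retrieve the results later via Future.get().
        int ok = 0;
        for (Future<Boolean> f : pending) {
            if (f.get()) ok++;
        }
        pool.shutdown();
        System.out.println(ok + " of 8 calls succeeded");
    }
}
```

The point of the proposal is exactly this decoupling: the single caller thread no longer waits for each call to finish before issuing the next one.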
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301129#comment-15301129 ] Tsz Wo Nicholas Sze commented on HDFS-9924: --- > Nicholas, I proposed two solutions above, neither of which you have commented > on ... As mentioned previously, please have the API discussion in HADOOP-12910. Thanks. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: AsyncHdfs20160510.pdf > > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls. > We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10408) Add out-of-order tests for async DFS API
[ https://issues.apache.org/jira/browse/HDFS-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10408: - Resolution: Duplicate Status: Resolved (was: Patch Available) Resolved this as a duplicate since the interleaving tests in HDFS-10446 should cover out-of-order response retrieval. > Add out-of-order tests for async DFS API > > > Key: HDFS-10408 > URL: https://issues.apache.org/jira/browse/HDFS-10408 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10408-HDFS-9924.000.patch > > > HDFS-10224 and HDFS-10346 mostly test the batch style async request/response. > Out-of-order case (i.e. out of order retrieval of response) should also be > tested. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-10444) Refactor tests by moving irrelevant parts out of TestAsyncDFSRename
[ https://issues.apache.org/jira/browse/HDFS-10444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou resolved HDFS-10444. -- Resolution: Duplicate Marking this as a duplicate since it's been addressed in HDFS-10431. > Refactor tests by moving irrelevant parts out of TestAsyncDFSRename > --- > > Key: HDFS-10444 > URL: https://issues.apache.org/jira/browse/HDFS-10444 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > TestAsyncDFSRename contains many tests related to setPermission, setOwner and > so on. They should be moved to TestAsyncDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10431: - Status: Patch Available (was: Open) > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10431: - Summary: Refactor tests of Async DFS (was: Refactor Async DFS related tests to reuse shared instance of AsyncDistributedFileSystem instance to speed up tests) > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301105#comment-15301105 ] Xiaobing Zhou commented on HDFS-10431: -- v000 patch is posted for review. > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse shared instance of AsyncDistributedFileSystem instance to speed up tests
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10431: - Description: 1. Move irrelevant parts out of TestAsyncDFSRename 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and cached in ipc.Client. Client instances are cached based on SocketFactory. In order to test different cases in various limits, every test (e.g. TestAsyncDFSRename and TestAsyncDFS) creates separate instance of MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not efficient in that tests may take long time to bootstrap MiniDFSClusters. It's even worse if cluster needs to restart in the middle. This proposes to do refactoring to use shared instance of AsyncDistributedFileSystem for speedup. was:Limit of max async calls(i.e. ipc.client.async.calls.max) is set and cached in ipc.Client. Client instances are cached based on SocketFactory. In order to test different cases in various limits, every test (e.g. TestAsyncDFSRename and TestAsyncDFS) creates separate instance of MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not efficient in that tests may take long time to bootstrap MiniDFSClusters. It's even worse if cluster needs to restart in the middle. This proposes to do refactoring to use shared instance of AsyncDistributedFileSystem for speedup. > Refactor Async DFS related tests to reuse shared instance of > AsyncDistributedFileSystem instance to speed up tests > -- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
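The refactoring idea in HDFS-10431, paying the cluster boot cost once per test class instead of once per test, can be sketched as follows. {{ExpensiveCluster}} is a hypothetical stand-in for MiniDFSCluster, and {{beforeAll}} plays the role of a JUnit @BeforeClass method:

```java
public class SharedFixtureSketch {
    // Stand-in for MiniDFSCluster: construction is the expensive step
    // (in the real tests it can take many seconds to bootstrap).
    static class ExpensiveCluster {
        static int boots = 0;
        ExpensiveCluster() { boots++; }
    }

    // Shared instance, as a @BeforeClass-style fixture would hold it.
    static ExpensiveCluster shared;

    static void beforeAll() { shared = new ExpensiveCluster(); }

    // Each test reuses 'shared' instead of booting its own cluster.
    static void runTest() { assert shared != null; }

    public static void main(String[] args) {
        beforeAll();
        for (int i = 0; i < 5; i++) runTest();
        // Five tests, one boot: the cost that per-test setup would pay five times.
        System.out.println(ExpensiveCluster.boots);
    }
}
```

The trade-off is the usual one for shared fixtures: tests must not leave the shared cluster in a state that breaks their siblings, which is part of why the move out of TestAsyncDFSRename matters.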
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: HDFS-10463.branch-2.001.patch Here's a branch-2 patch. > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10431: - Attachment: HDFS-10431-HDFS-9924.000.patch > Refactor tests of Async DFS > --- > > Key: HDFS-10431 > URL: https://issues.apache.org/jira/browse/HDFS-10431 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10431-HDFS-9924.000.patch > > > 1. Move irrelevant parts out of TestAsyncDFSRename > 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and > cached in ipc.Client. Client instances are cached based on SocketFactory. In > order to test different cases in various limits, every test (e.g. > TestAsyncDFSRename and TestAsyncDFS) creates separate instance of > MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not > efficient in that tests may take long time to bootstrap MiniDFSClusters. It's > even worse if cluster needs to restart in the middle. This proposes to do > refactoring to use shared instance of AsyncDistributedFileSystem for speedup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers reassigned HDFS-10463: - Assignee: Daniel Templeton > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10433) Make retry also works well for Async DFS
[ https://issues.apache.org/jira/browse/HDFS-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-10433: --- Attachment: h10433_20160525.patch h10433_20160525.patch: fixes test failures and warnings. > Make retry also works well for Async DFS > > > Key: HDFS-10433 > URL: https://issues.apache.org/jira/browse/HDFS-10433 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Xiaobing Zhou >Assignee: Tsz Wo Nicholas Sze > Attachments: h10433_20160524.patch, h10433_20160525.patch > > > In current Async DFS implementation, file system calls are invoked and > returns Future immediately to clients. Clients call Future#get to retrieve > final results. Future#get internally invokes a chain of callbacks residing in > ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. The > callback path bypasses the original retry layer/logic designed for > synchronous DFS. This proposes refactoring to make retry also works for Async > DFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
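The general shape of retry for an asynchronous call, where a failed attempt chains a new attempt onto the returned future rather than looping inside a blocking retry layer, can be sketched with CompletableFuture. This is a sketch of the pattern only, not the actual ipc.Client changes; {{withRetry}} is a hypothetical helper:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

public class AsyncRetrySketch {
    // Retry an async operation without blocking: on failure, chain a
    // fresh attempt onto the previous future instead of sleeping in a loop.
    static <T> CompletableFuture<T> withRetry(Supplier<CompletableFuture<T>> op,
                                              int attemptsLeft) {
        return op.get().handle((value, err) -> {
            if (err == null) {
                return CompletableFuture.completedFuture(value);
            }
            if (attemptsLeft <= 1) {
                CompletableFuture<T> failed = new CompletableFuture<>();
                failed.completeExceptionally(err);
                return failed;
            }
            return withRetry(op, attemptsLeft - 1);  // chain the next attempt
        }).thenCompose(next -> next);
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds: three attempts in total.
        Supplier<CompletableFuture<String>> flaky = () -> {
            calls[0]++;
            CompletableFuture<String> f = new CompletableFuture<>();
            if (calls[0] < 3) f.completeExceptionally(new RuntimeException("transient"));
            else f.complete("done");
            return f;
        };
        System.out.println(withRetry(flaky, 5).get() + " after " + calls[0] + " attempts");
    }
}
```

The caller still gets a single future back; the retries happen inside the callback chain, which is the behavior the synchronous retry layer provides today but the async callback path currently bypasses.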
[jira] [Created] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation
Juan Yu created HDFS-10466: -- Summary: DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation Key: HDFS-10466 URL: https://issues.apache.org/jira/browse/HDFS-10466 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Reporter: Juan Yu Assignee: Juan Yu Priority: Minor https://issues.apache.org/jira/browse/HDFS-202 added a new API listLocatedStatus() to get all files' status with block locations for a directory. This is great that we don't need to call FileSystem.getFileBlockLocations() for each file. it's much faster (about 8-10 times). However, the returned LocatedFileStatus only contains basic BlockLocation instead of HdfsBlockLocation, the LocatedBlock details are stripped out. It should do the similar as DFSClient.getBlockLocations(), return HdfsBlockLocation which provide full block location details. The implementation of DistributedFileSystem. listLocatedStatus() retrieves HdfsLocatedFileStatus which contains all information, but when convert it to LocatedFileStatus, it doesn't keep LocatedBlock data. It's a simple (and compatible) change to make to keep the LocatedBlock details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: (was: HDFS-10463.001.patch) > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: HDFS-10463.001.patch > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10464) libhdfs++: Implement GetPathInfo
Bob Hansen created HDFS-10464: - Summary: libhdfs++: Implement GetPathInfo Key: HDFS-10464 URL: https://issues.apache.org/jira/browse/HDFS-10464 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bob Hansen -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10465) libhdfs++: Implement ListDirectory
Bob Hansen created HDFS-10465: - Summary: libhdfs++: Implement ListDirectory Key: HDFS-10465 URL: https://issues.apache.org/jira/browse/HDFS-10465 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bob Hansen -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300936#comment-15300936 ] Hadoop QA commented on HDFS-10459: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-10459 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806230/HDFS-10459-b2.7.003.patch | | JIRA Issue | HDFS-10459 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15566/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459-b2.7.002.patch, HDFS-10459-b2.7.003.patch, > HDFS-10459.001.patch, HDFS-10459.002.patch > > > GetTurnOffTip overstates the number of blocks necessary to come out of safe > mode by 1 due to an arbitrary '+1' in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Attachment: HDFS-10459-b2.7.003.patch Getting rid of '+1' in GetTurnOffTip calculation so that the log message is correct. > getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459-b2.7.002.patch, HDFS-10459-b2.7.003.patch, > HDFS-10459.001.patch, HDFS-10459.002.patch > > > GetTurnOffTip overstates the number of blocks necessary to come out of safe > mode by 1 due to an arbitrary '+1' in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Affects Version/s: (was: 2.9.0) 2.7.0 Description: GetTurnOffTip overstates the number of blocks necessary to come out of safe mode by 1 due to an arbitrary '+1' in the code. (was: The computation works on threshold = 1, but not on threshold < 1. I propose making blockThreshold equal to the ceiling of total*threshold. Since we need to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, since blocks work in integer values. ) Summary: getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7 (was: getTurnOffTip computes needed block incorrectly for threshold < 1) > getTurnOffTip computes needed block incorrectly for threshold < 1 in b2.7 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459-b2.7.002.patch, HDFS-10459.001.patch, > HDFS-10459.002.patch > > > GetTurnOffTip overstates the number of blocks necessary to come out of safe > mode by 1 due to an arbitrary '+1' in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
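The threshold arithmetic behind the fix above can be sketched as follows. This is a minimal illustration of the proposed computation, not the actual SafeModeInfo code; the class and method names are hypothetical:

```java
public class SafeModeThreshold {
    // Hypothetical helper illustrating the proposed fix: the number of safe
    // blocks needed to leave safe mode is the ceiling of total * threshold,
    // with no extra '+1'. Since block counts are integers, needing more than
    // 14.9 blocks is the same as needing 15.
    public static long neededBlocks(long totalBlocks, double threshold) {
        return (long) Math.ceil(totalBlocks * threshold);
    }

    public static void main(String[] args) {
        // threshold = 1.0: all 15 blocks are needed
        System.out.println(neededBlocks(15, 1.0d));  // 15
        // threshold = 0.99: ceil(14.85) = 15
        System.out.println(neededBlocks(15, 0.99d)); // 15
        // threshold = 0.5: ceil(7.5) = 8
        System.out.println(neededBlocks(15, 0.5d));  // 8
    }
}
```

With this formulation the log message and the actual exit condition agree for thresholds below 1 as well.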
[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300896#comment-15300896 ] Hadoop QA commented on HDFS-10463: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-10463 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806224/HDFS-10463.001.patch | | JIRA Issue | HDFS-10463 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15565/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: HDFS-10463.001.patch > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Status: Patch Available (was: Open) > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: HDFS-10463.001.patch > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Affects Version/s: 2.9.0 > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
[ https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated HDFS-10463: Attachment: (was: HDFS-10463.001.patch) > TestRollingFileSystemSinkWithHdfs needs some cleanup > > > Key: HDFS-10463 > URL: https://issues.apache.org/jira/browse/HDFS-10463 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Daniel Templeton >Priority: Critical > Attachments: HDFS-10463.001.patch > > > There are three primary issues. The most significant is that the > {{testFlushThread()}} method doesn't clean up after itself, which can cause > other tests to fail. The other big issue is that the {{testSilentAppend()}} > method is testing the wrong thing. An additional minor issue is that none of > the tests are careful about making sure the metrics system gets shutdown in > all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300840#comment-15300840 ] Hudson commented on HDFS-10434: --- SUCCESS: Integrated in Hadoop-trunk-Commit #9860 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9860/]) HDFS-10434. Fix intermittent test failure of (kai.zheng: rev f69f5ab3b6964b9124c07c97f13141227d5b87b9) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeErasureCodingMetrics.java > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup
Daniel Templeton created HDFS-10463: --- Summary: TestRollingFileSystemSinkWithHdfs needs some cleanup Key: HDFS-10463 URL: https://issues.apache.org/jira/browse/HDFS-10463 Project: Hadoop HDFS Issue Type: Bug Reporter: Daniel Templeton Priority: Critical Attachments: HDFS-10463.001.patch There are three primary issues. The most significant is that the {{testFlushThread()}} method doesn't clean up after itself, which can cause other tests to fail. The other big issue is that the {{testSilentAppend()}} method is testing the wrong thing. An additional minor issue is that none of the tests are careful about making sure the metrics system gets shutdown in all cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
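The "shutdown in all cases" concern above is the standard try/finally cleanup pattern. A minimal sketch, with a stand-in class rather than the real Hadoop metrics system (all names here are illustrative, not from the patch):

```java
public class MetricsShutdownSketch {
    // Stand-in for the metrics system the tests share; purely illustrative.
    public static class FakeMetricsSystem {
        public boolean running = true;
        public void shutdown() { running = false; }
    }

    // Runs a (possibly failing) test body and guarantees the metrics system
    // is shut down afterwards, so a failure in one test cannot leak state
    // into the next. Returns the final running flag.
    public static boolean runTest(Runnable testBody) {
        FakeMetricsSystem ms = new FakeMetricsSystem();
        try {
            testBody.run();
        } finally {
            ms.shutdown(); // always executes, even if testBody throws
        }
        return ms.running;
    }

    public static void main(String[] args) {
        System.out.println(runTest(() -> { })); // false: shut down cleanly
    }
}
```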
[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys
[ https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Atul Sikaria updated HDFS-10462: Assignee: (was: Atul Sikaria) Description: Current OAuth2 support (used by HADOOP-12666) supports getting a token using client creds. However, the client creds support does not pass the "resource" parameter required by Azure AD. This work adds support for the "resource" parameter when acquiring the OAuth2 token from Azure AD, so the client credentials can be used to authenticate to Azure Data Lake. (was: Current OAuth2 support (used by HADOOP-12666) supports getting a token using a refresh token, or using client creds. However, the client creds support does not pass the "resource" parameter required by Azure AD. This work adds support for the "resource" parameter when acquiring the OAuth2 token from Azure AD, so the client credentials can be used to authenticate to Azure Data Lake. ) > Authenticate to Azure Data Lake using client ID and keys > > > Key: HDFS-10462 > URL: https://issues.apache.org/jira/browse/HDFS-10462 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client > Environment: All >Reporter: Atul Sikaria > Original Estimate: 168h > Remaining Estimate: 168h > > Current OAuth2 support (used by HADOOP-12666) supports getting a token using > client creds. However, the client creds support does not pass the "resource" > parameter required by Azure AD. This work adds support for the "resource" > parameter when acquiring the OAuth2 token from Azure AD, so the client > credentials can be used to authenticate to Azure Data Lake.
[jira] [Updated] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys
[ https://issues.apache.org/jira/browse/HDFS-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HDFS-10462: - Assignee: Atul Sikaria > Authenticate to Azure Data Lake using client ID and keys > > > Key: HDFS-10462 > URL: https://issues.apache.org/jira/browse/HDFS-10462 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client > Environment: All >Reporter: Atul Sikaria >Assignee: Atul Sikaria > Original Estimate: 168h > Remaining Estimate: 168h > > Current OAuth2 support (used by HADOOP-12666) supports getting a token using > a refresh token, or using client creds. However, the client creds support > does not pass the "resource" parameter required by Azure AD. This work adds > support for the "resource" parameter when acquiring the OAuth2 token from > Azure AD, so the client credentials can be used to authenticate to Azure Data > Lake.
[jira] [Created] (HDFS-10462) Authenticate to Azure Data Lake using client ID and keys
Atul Sikaria created HDFS-10462: --- Summary: Authenticate to Azure Data Lake using client ID and keys Key: HDFS-10462 URL: https://issues.apache.org/jira/browse/HDFS-10462 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Environment: All Reporter: Atul Sikaria Current OAuth2 support (used by HADOOP-12666) supports getting a token using a refresh token, or using client creds. However, the client creds support does not pass the "resource" parameter required by Azure AD. This work adds support for the "resource" parameter when acquiring the OAuth2 token from Azure AD, so the client credentials can be used to authenticate to Azure Data Lake.
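The missing piece described above is an extra form parameter in the client-credentials token request. A minimal sketch of the request body only (the endpoint, resource URI, and helper names here are illustrative assumptions, not the patch's actual code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class ClientCredsTokenRequest {
    // Hypothetical sketch of the form body for an Azure AD (v1) OAuth2
    // client-credentials token request. The key point from this issue:
    // unlike a generic client-credentials grant, Azure AD also requires a
    // "resource" parameter naming the service being accessed.
    public static String tokenRequestBody(String clientId, String clientSecret,
            String resource) throws UnsupportedEncodingException {
        return "grant_type=client_credentials"
            + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
            + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
            + "&resource=" + URLEncoder.encode(resource, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // The resource URI below is a placeholder for whatever service
        // (e.g. a Data Lake endpoint) the token is meant for.
        String body = tokenRequestBody("my-client-id", "my-secret",
            "https://datalake.azure.net/");
        System.out.println(body.contains("&resource=")); // true
    }
}
```

The body would then be POSTed to the tenant's token endpoint; only the parameter handling is sketched here.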
[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300824#comment-15300824 ] Kai Zheng commented on HDFS-10434: -- Thanks [~rakeshr] for the contribution! Committed to trunk. > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-10434: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha1 Status: Resolved (was: Patch Available) > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300818#comment-15300818 ] Kai Zheng commented on HDFS-10434: -- +1 on the latest patch and will commit it shortly. > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
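Flaky failures like the one above, where an asynchronously updated counter is asserted exactly once, are typically fixed by polling the metric until it reaches the expected value or a timeout expires. A generic sketch of that idea (not the actual patch, which may use a Hadoop test utility instead; the interface and method names are hypothetical):

```java
public class WaitForMetric {
    // Minimal functional interface standing in for a metric getter.
    public interface LongSupplierLike { long get(); }

    // Poll until the metric reaches the expected value or the timeout
    // expires. A single immediate assert can observe 0 because the EC
    // reconstruction task counter is updated on another thread.
    public static boolean waitForValue(LongSupplierLike metric, long expected,
            long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (metric.get() == expected) {
                return true;
            }
            Thread.sleep(10); // back off briefly between checks
        }
        return metric.get() == expected; // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        final long[] counter = {0};
        // Simulate an asynchronous metric update arriving after 50 ms.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            counter[0] = 1;
        }).start();
        System.out.println(waitForValue(() -> counter[0], 1, 2000));
    }
}
```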
[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package
[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-8057: -- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed the branch-2 patch. Thanks again Asanuma-san! > Move BlockReader implementation to the client implementation package > > > Key: HDFS-8057 > URL: https://issues.apache.org/jira/browse/HDFS-8057 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Tsz Wo Nicholas Sze >Assignee: Takanobu Asanuma > Fix For: 2.8.0 > > Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, > HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, > HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.003.patch, > HDFS-8057.branch-2.5.patch > > > BlockReaderLocal, RemoteBlockReader, etc should be moved to > org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader > to BlockReaderRemote. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300602#comment-15300602 ] Jing Zhao commented on HDFS-7240: - Talking about EC in ozone, I had a general discussion with [~drankye] last week while he was visiting us. We think ozone's storage container layer can make EC work easier and cleaner, especially considering we're planning the EC phase II, i.e., to do EC in an offline mode. Fundamentally EC/replication should be handled in the storage layer (i.e., the block of HDFS, and the storage container in ozone) as two options for maintaining data's durability. An ozone storage container will have the capability to support both. The general design to support EC in ozone can be very similar to existing object stores such as [magic pocket | https://blogs.dropbox.com/tech/2016/05/inside-the-magic-pocket/]. We can have a more detailed discussion about the design and finally have a section in the design doc, but I do not think supporting EC will become a hurdle for us. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document.
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300565#comment-15300565 ] Anu Engineer commented on HDFS-7240: [~steve_l] [~drankye] We will certainly have a call to discuss the design once a detailed design doc is posted. [~andrew.wang] Thanks for your comments. bq. Here, I'm also trying to be as constructive as possible, raising questions as well as proposing possible solutions. I appreciate the spirit and rest assured that we really appreciate you raising questions. It is just that writing a design doc takes a little time. bq. We discussed the need for range instead of hash partitioning (which I'm happy to see made it), as well as the overhead of doing metadata and data lookups (which could motivate storing Ozone metadata in Raft instead of in a container). This has been my sentiment all along, that we have been listening to the community feedback and making changes. We will certainly do the same going forward. I look forward to your comments and thoughts on ozone once we post the design doc. [~zhz] [~cmccabe] and [~andrew.wang] I would like to discuss the technical issues that have been raised in this JIRA after I post the design doc. It will allow us to have a shared understanding of where we are and will eliminate a lot of repetition. I personally believe it would be much more productive to have the discussion once we all have a shared view of the issues and suggested solutions. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. 
Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300556#comment-15300556 ] Hadoop QA commented on HDFS-10459: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 38 unchanged - 0 fixed = 41 total (was 38) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 52s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 55s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestFSEditLogLoader | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.namenode.TestFSImage | | | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.TestSafeMode | | | hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode | | | hadoop.hdfs.server.namenode.ha.TestHASafeMode | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806155/HDFS-10459.002.patch | | JIRA Issue | HDFS-10459 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3e9d747714ab 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9a31e5d | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15563/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15563/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit test logs |
[jira] [Commented] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300554#comment-15300554 ] Hadoop QA commented on HDFS-10459: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-10459 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806172/HDFS-10459-b2.7.002.patch | | JIRA Issue | HDFS-10459 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15564/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > getTurnOffTip computes needed block incorrectly for threshold < 1 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459-b2.7.002.patch, HDFS-10459.001.patch, > HDFS-10459.002.patch > > > The computation works on threshold = 1, but not on threshold < 1. I propose > making blockThreshold equal to the ceiling of total*threshold. Since we need > to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, > since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300552#comment-15300552 ] Jitendra Nath Pandey commented on HDFS-7240: Of course the design is flexible and the project would benefit from a constructive discussion here. As repeatedly mentioned before, an updated document will be posted soon, precisely so that any input and concerns can be discussed. No design gets frozen until it is implemented. All the implementation so far is in the jiras, and will continue to be so. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Attachment: HDFS-10459-b2.7.002.patch Attaching branch-2.7 patch. > getTurnOffTip computes needed block incorrectly for threshold < 1 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459-b2.7.002.patch, HDFS-10459.001.patch, > HDFS-10459.002.patch > > > The computation works on threshold = 1, but not on threshold < 1. I propose > making blockThreshold equal to the ceiling of total*threshold. Since we need > to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, > since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10457) DataNode should not auto-format block pool directory if VERSION is missing
[ https://issues.apache.org/jira/browse/HDFS-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300542#comment-15300542 ] Wei-Chiu Chuang commented on HDFS-10457: I tested my patch on a cluster and it does prevent auto-formatting. However, unlike a failed volume, a failed block pool is not reported to DataNode/NameNode JMX; it only shows up in the log. I wonder if we should also make some changes to improve the warning when a block pool fails to load. > DataNode should not auto-format block pool directory if VERSION is missing > -- > > Key: HDFS-10457 > URL: https://issues.apache.org/jira/browse/HDFS-10457 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-10457.001.patch > > > HDFS-10360 prevents the DN from auto-formatting a volume directory if the > current/VERSION is missing. However, if the current/VERSION in a > block pool directory is missing instead, the DN still auto-formats the directory. > Filing this jira to fix the bug. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300537#comment-15300537 ] Rakesh R commented on HDFS-9833: [~drankye], I have created the follow-on tasks HDFS-10460 and HDFS-10461. I will start working on these after the basic patch in this jira is committed. > Erasure coding: recomputing block checksum on the fly by reconstructing the > missed/corrupt block data > - > > Key: HDFS-9833 > URL: https://issues.apache.org/jira/browse/HDFS-9833 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Rakesh R > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-9833-00-draft.patch, HDFS-9833-01.patch, > HDFS-9833-02.patch, HDFS-9833-03.patch, HDFS-9833-04.patch, > HDFS-9833-05.patch, HDFS-9833-06.patch > > > As discussed in HDFS-8430 and HDFS-9694, to compute striped file checksum > even some of striped blocks are missed, we need to consider recomputing block > checksum on the fly for the missed/corrupt blocks. To recompute the block > checksum, the block data needs to be reconstructed by erasure decoding, and > the main needed codes for the block reconstruction could be borrowed from > HDFS-9719, the refactoring of the existing {{ErasureCodingWorker}}. In EC > worker, reconstructed blocks need to be written out to target datanodes, but > here in this case, the remote writing isn't necessary, as the reconstructed > block data is only used to recompute the checksum. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-10461) Erasure Coding: Optimize block checksum recalculation logic on the fly by reconstructing multiple missed blocks at a time
Rakesh R created HDFS-10461: --- Summary: Erasure Coding: Optimize block checksum recalculation logic on the fly by reconstructing multiple missed blocks at a time Key: HDFS-10461 URL: https://issues.apache.org/jira/browse/HDFS-10461 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Rakesh R Assignee: Rakesh R This is a follow-on task to HDFS-9833, which recomputes only one block checksum at a time. The reconstruction logic can be further optimized by reconstructing multiple blocks at a time. There are several cases to be considered, e.g. case-1) Live block indices: {{0, 4, 5, 6, 7, 8}} - consecutive missing data blocks 1, 2, 3; case-2) Live block indices: {{0, 2, 4, 6, 7, 8}} - jumbled missing data blocks 1, 3, 5 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
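To make the two cases in the issue description concrete, here is a small illustrative sketch (not Hadoop code; the class and method names are invented for this example) that derives the missing internal block indices from the live ones for a 9-block RS(6,3) group and checks whether they form a consecutive run:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: the live-index sets below come from case-1 and case-2
// in HDFS-10461; this is not the actual reconstruction code.
public class MissingIndicesSketch {

    // Collect the indices in [0, totalBlocks) that are absent from 'live'.
    public static List<Integer> missing(int totalBlocks, int[] live) {
        boolean[] present = new boolean[totalBlocks];
        for (int i : live) {
            present[i] = true;
        }
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < totalBlocks; i++) {
            if (!present[i]) {
                out.add(i);
            }
        }
        return out; // already sorted ascending by construction
    }

    // True if the (sorted) missing indices form one consecutive run.
    public static boolean consecutive(List<Integer> idx) {
        for (int k = 1; k < idx.size(); k++) {
            if (idx.get(k) != idx.get(k - 1) + 1) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[] case1 = {0, 4, 5, 6, 7, 8}; // missing 1, 2, 3 (consecutive)
        int[] case2 = {0, 2, 4, 6, 7, 8}; // missing 1, 3, 5 (jumbled)
        System.out.println(missing(9, case1) + " consecutive=" + consecutive(missing(9, case1)));
        System.out.println(missing(9, case2) + " consecutive=" + consecutive(missing(9, case2)));
    }
}
```

The distinction only illustrates why a batched reconstruction pass might treat the two cases differently; how the optimized decoder actually handles jumbled indices is left to the follow-on work.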
[jira] [Created] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block
Rakesh R created HDFS-10460: --- Summary: Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block Key: HDFS-10460 URL: https://issues.apache.org/jira/browse/HDFS-10460 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Reporter: Rakesh R Assignee: Rakesh R This jira is a HDFS-9833 follow-on task to address reconstructing a block and then recalculating the block checksum for a particular range query. For example, {code} // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 65536 * 6 = 393216 FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true); {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300521#comment-15300521 ] Steve Loughran commented on HDFS-7240: -- +1 for some meetup; an online hangout/WebEx would be good for remote people like me to catch up. Arguing with each other over a JIRA isn't the way to review designs or their implementations. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300518#comment-15300518 ] Andrew Wang commented on HDFS-7240: --- Sorry, hit reply too early. Quoting from my earlier response to Anu: bq. Really though, even if the community hadn't explicitly expressed interest, all of this activity should still have been done in a public forum. It's very hard for newcomers to ramp up unless design discussions are being done publicly. This is how software is supposed to be developed at Apache, so everyone can watch and contribute. It's not a reasonable standard to require each of the 160 watchers on this JIRA to explicitly reach out to be involved in the conversation. And, like I said above, it's very hard to contribute unless this conversation is happening publicly. I'm a bit annoyed here since we did reach out in late Feb, and we had a nice design convo. We discussed the need for range instead of hash partitioning (which I'm happy to see made it), as well as the overhead of doing metadata and data lookups (which could motivate storing Ozone metadata in Raft instead of in a container). Then, as now, I also asked to be involved in the design discussions since this is a topic I'm very interested in. Here, I'm also trying to be as constructive as possible, raising questions as well as proposing possible solutions. I keep saying this, but I would like to collaborate on this project. If you're willing to revisit some of the design points we're discussing above, we can put the past behind us and move forward. So far though it feels like I'm being rebuffed. > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-7240: Assignee: Jitendra Nath Pandey (was: Andrew Wang) > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Jitendra Nath Pandey > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-7240: -- Comment: was deleted (was: Quoting my earlier response to Anu: bq. Really though, even if the community hadn't explicitly expressed interest, all of this activity should still have been done in a public forum. It's very hard for newcomers to ramp up unless design discussions are being done publicly. Development is supposed to be done in the open so everyone can watch and contribute. Also, code is not the only form of contribution. As also mentioned above, we had a call in late February this year, which is where we discussed the need for range partitioning (something I'm glad is being done in the new design) as well as raising concerns about the number of hops to lookup and read data (which I'm guessing is why metadata is now replicated via Raft rather than stored in containers).) > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Andrew Wang > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300488#comment-15300488 ] Andrew Wang commented on HDFS-7240: --- Quoting my earlier response to Anu: bq. Really though, even if the community hadn't explicitly expressed interest, all of this activity should still have been done in a public forum. It's very hard for newcomers to ramp up unless design discussions are being done publicly. Development is supposed to be done in the open so everyone can watch and contribute. Also, code is not the only form of contribution. As also mentioned above, we had a call in late February this year, which is where we discussed the need for range partitioning (something I'm glad is being done in the new design) as well as raising concerns about the number of hops to lookup and read data (which I'm guessing is why metadata is now replicated via Raft rather than stored in containers). > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Andrew Wang > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-7240) Object store in HDFS
[ https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reassigned HDFS-7240: - Assignee: Andrew Wang (was: Jitendra Nath Pandey) > Object store in HDFS > > > Key: HDFS-7240 > URL: https://issues.apache.org/jira/browse/HDFS-7240 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jitendra Nath Pandey >Assignee: Andrew Wang > Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf > > > This jira proposes to add object store capabilities into HDFS. > As part of the federation work (HDFS-1052) we separated block storage as a > generic storage layer. Using the Block Pool abstraction, new kinds of > namespaces can be built on top of the storage layer i.e. datanodes. > In this jira I will explore building an object store using the datanode > storage, but independent of namespace metadata. > I will soon update with a detailed design document. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster
[ https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300371#comment-15300371 ] Hadoop QA commented on HDFS-10458: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 5s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 57s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 101m 10s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806138/HDFS-10458.00.patch | | JIRA Issue | HDFS-10458 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 66a6ff51d4fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9a31e5d | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15561/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15561/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15561/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15561/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > getFileEncryptionInfo should return quickly for non-encrypted cluster >
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Affects Version/s: (was: 0.23.0) (was: 0.22.0) 2.9.0 Description: The computation works on threshold = 1, but not on threshold < 1. I propose making blockThreshold equal to the ceiling of total*threshold. Since we need to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, since blocks work in integer values. (was: The fix added in HDFS-2002 only works in cases where threshold < 1, but not when threshold = 1. There is a '+1' added to the computation because of this assumption. I propose that instead of adding a '+1', we just set blockThreshold to be the ceiling of blocksTotal*threshold. Since we need to be >= blockThreshold to get out of safe mode, this will work. >14.9 is the same as =15, since blocks work in integer values. ) Summary: getTurnOffTip computes needed block incorrectly for threshold < 1 (was: getTurnOffTip computes needed block incorrectly for threshold = 1) > getTurnOffTip computes needed block incorrectly for threshold < 1 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459.001.patch > > > The computation works on threshold = 1, but not on threshold < 1. I propose > making blockThreshold equal to the ceiling of total*threshold. Since we need > to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, > since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
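The arithmetic being proposed in the description can be sketched as follows. This is only a hedged illustration of "blockThreshold equal to the ceiling of total*threshold", not the actual BlockManager/safe-mode patch; the names blockThreshold, blockTotal, and threshold are stand-ins echoing the JIRA text:

```java
// Illustrative sketch of the proposed HDFS-10459 fix: compute the number of
// blocks needed to leave safe mode as ceil(blockTotal * threshold) instead
// of the old "+1" adjustment. Names are invented for this example.
public class SafeModeThresholdSketch {

    // Needing ">14.9" blocks is the same as needing ">=15", since block
    // counts are integers, so the ceiling works for threshold < 1 as well
    // as threshold = 1.
    public static long blockThreshold(long blockTotal, double threshold) {
        return (long) Math.ceil(blockTotal * threshold);
    }

    public static void main(String[] args) {
        // threshold < 1: 20 * 0.745 = ~14.9, so 15 blocks are needed
        System.out.println(blockThreshold(20, 0.745)); // 15
        // threshold = 1: 15 * 1.0 = 15.0, ceiling changes nothing
        System.out.println(blockThreshold(15, 1.0));   // 15
    }
}
```

With the old "+1" formulation the threshold = 1 case overshoots by one block; the ceiling handles both cases uniformly because a fractional requirement always rounds up to the next whole block.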
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold < 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Attachment: HDFS-10459.002.patch Rebasing to trunk. > getTurnOffTip computes needed block incorrectly for threshold < 1 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459.001.patch, HDFS-10459.002.patch > > > The computation works on threshold = 1, but not on threshold < 1. I propose > making blockThreshold equal to the ceiling of total*threshold. Since we need > to be >= blockThreshold to get out of safe mode, >14.9 is the same as =15, > since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300329#comment-15300329 ] Hadoop QA commented on HDFS-9833: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 
1m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 21s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 111m 39s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMXBean | | | hadoop.hdfs.TestAsyncDFSRename | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806134/HDFS-9833-06.patch | | JIRA Issue | HDFS-9833 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 8ccfe771cabb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9a31e5d | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15560/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15560/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold = 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300298#comment-15300298 ] Hadoop QA commented on HDFS-10459: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} | {color:red} HDFS-10459 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806144/HDFS-10459.001.patch | | JIRA Issue | HDFS-10459 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15562/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated. > getTurnOffTip computes needed block incorrectly for threshold = 1 > - > > Key: HDFS-10459 > URL: https://issues.apache.org/jira/browse/HDFS-10459 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.22.0, 0.23.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-10459.001.patch > > > The fix added in HDFS-2002 only works in cases where threshold < 1, but not > when threshold = 1. There is a '+1' added to the computation because of this > assumption. I propose that instead of adding a '+1', we just set > blockThreshold to be the ceiling of blocksTotal*threshold. Since we need to > be >= blockThreshold to get out of safe mode, this will work. >14.9 is the > same as =15, since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold = 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger reassigned HDFS-10459: -- Assignee: Eric Badger
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold = 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Attachment: HDFS-10459.001.patch Attaching a patch that fixes the issue by making blockThreshold equal to the ceiling of blockTotal*threshold
[jira] [Updated] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold = 1
[ https://issues.apache.org/jira/browse/HDFS-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HDFS-10459: --- Affects Version/s: 0.22.0 0.23.0 Status: Patch Available (was: Open)
[jira] [Created] (HDFS-10459) getTurnOffTip computes needed block incorrectly for threshold = 1
Eric Badger created HDFS-10459: -- Summary: getTurnOffTip computes needed block incorrectly for threshold = 1 Key: HDFS-10459 URL: https://issues.apache.org/jira/browse/HDFS-10459 Project: Hadoop HDFS Issue Type: Bug Reporter: Eric Badger The fix added in HDFS-2002 only works in cases where threshold < 1, but not when threshold = 1. There is a '+1' added to the computation because of this assumption. I propose that instead of adding a '+1', we just set blockThreshold to be the ceiling of blocksTotal*threshold. Since we need to be >= blockThreshold to get out of safe mode, this will work. >14.9 is the same as =15, since blocks work in integer values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
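The proposed fix above is plain arithmetic; a minimal sketch (illustrative class and method names, not the actual SafeModeInfo code) of replacing the '+1' with a ceiling:

```java
// Illustrative sketch of the HDFS-10459 proposal: compute the safe-mode
// threshold as ceil(blockTotal * threshold) so that threshold = 1 needs
// exactly blockTotal blocks, with no off-by-one. Names are hypothetical.
public class BlockThresholdSketch {

    // blockThreshold = ceil(blockTotal * threshold); safe mode is left
    // once the number of safe blocks is >= blockThreshold.
    static long computeBlockThreshold(long blockTotal, double threshold) {
        return (long) Math.ceil(blockTotal * threshold);
    }

    public static void main(String[] args) {
        // threshold < 1: ceil(15 * 0.993) = 15, i.e. ">14.9 is the same as >=15"
        System.out.println(computeBlockThreshold(15, 0.993)); // 15
        // threshold = 1: ceil(15 * 1.0) = 15, the case the "+1" broke
        System.out.println(computeBlockThreshold(15, 1.0));   // 15
    }
}
```

With the ceiling, no special case for threshold = 1 is needed, since ceil is the identity on whole products.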
[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access
[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300256#comment-15300256 ] Daryn Sharp commented on HDFS-9924: --- I'm late to the game due to time constraints, but this feature greatly concerns me. It's true the NN can handle over 100k ops/sec, but only with a read-dominated workload. Even then, I've had to do _a lot_ of internal (hopefully soon to be published) performance work to prevent blowing the heap under such a sustained load - a recent user pushed a NN to 90k ops/sec for most of a weekend and barely dented the heap. BUT it was 81% read ops. In the past that would have been an 8-10 min GC. I digress. More on point: the intended use case is for mass write operations. Consider this: on multiple large clusters, offloading just a few thousand write ops/sec for log aggregation reduced 95th percentile processing time from 4ms to <.5ms and queue time from 20ms to 4ms. The extremely wild variance in the metrics also stabilized. I've already had performance concerns with hive's mass setOwner/setPermission, which I believe is single-threaded. This feature appears intended for hive. I'm really hesitant about a feature that makes it trivial to destroy a NN. > [umbrella] Asynchronous HDFS Access > --- > > Key: HDFS-9924 > URL: https://issues.apache.org/jira/browse/HDFS-9924 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: AsyncHdfs20160510.pdf > > > This is an umbrella JIRA for supporting Asynchronous HDFS Access. > Currently, all the API methods are blocking calls -- the caller is blocked > until the method returns. It is very slow if a client makes a large number > of independent calls in a single thread since each call has to wait until the > previous call is finished. It is inefficient if a client needs to create a > large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is > not blocked. The methods in the new API immediately return a Java Future > object. The return value can be obtained by the usual Future.get() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
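The proposed pattern above — methods that return immediately with a Java Future, resolved later via Future.get() — can be sketched generically. The actual async HDFS client API lives in the attached design doc and is not reproduced here; the names below are illustrative only:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Generic sketch of the Future-returning call pattern described above;
// renameAsync is a hypothetical stand-in, not the real async HDFS API.
public class AsyncPatternSketch {

    // Returns immediately; the (placeholder) work runs on the common pool.
    static Future<Boolean> renameAsync(String src, String dst) {
        return CompletableFuture.supplyAsync(() -> {
            // stand-in for the actual non-blocking rename RPC
            return !src.equals(dst);
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Boolean> f = renameAsync("/user/a", "/user/b"); // caller is not blocked
        System.out.println(f.get()); // block only when the result is needed
    }
}
```

This is what lets a single-threaded client (the hive mass setOwner/setPermission case mentioned above) pipeline many independent calls without one thread per call — and also what makes it easy to flood the NN, which is the concern raised.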
[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster
[ https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10458: - Attachment: HDFS-10458.00.patch {{trunk}} already has a method to get number of EZs without locking. Attaching initial version of patch. > getFileEncryptionInfo should return quickly for non-encrypted cluster > - > > Key: HDFS-10458 > URL: https://issues.apache.org/jira/browse/HDFS-10458 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-10458.00.patch > > > {{FSDirectory#getFileEncryptionInfo}} always acquires {{readLock}} and checks > if the path belongs to an EZ. For a busy system with potentially many listing > operations, this could cause locking contention. > I think we should add a call {{EncryptionZoneManager#hasEncryptionZone()}} to > return whether the system has any EZ. If no EZ at all, > {{getFileEncryptionInfo}} should return null without {{readLock}}. > If {{hasEncryptionZone}} is only used in the above scenario, maybe itself > doesn't need a {{readLock}} -- if the system doesn't have any EZ when > {{getFileEncryptionInfo}} is called on a path, it means the path cannot be > encrypted. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
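The fast path described in the issue can be sketched as follows. Field and method names are illustrative, not the actual FSDirectory/EncryptionZoneManager members:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch of the HDFS-10458 idea: if the namespace has no encryption
// zones at all, no path can be encrypted, so getFileEncryptionInfo can
// return null without acquiring the FS read lock. Names are hypothetical.
public class EzFastPathSketch {
    // EZ count maintained without needing the read lock to query it
    private final AtomicInteger numEncryptionZones = new AtomicInteger(0);

    boolean hasEncryptionZone() {
        return numEncryptionZones.get() > 0;
    }

    Object getFileEncryptionInfo(String path) {
        // Fast path: zero EZs means the path cannot be encrypted.
        if (!hasEncryptionZone()) {
            return null;
        }
        // Slow path: take the read lock and resolve the path's zone (elided).
        return resolveUnderReadLock(path);
    }

    private Object resolveUnderReadLock(String path) {
        return null; // placeholder for the locked lookup
    }
}
```

On a busy, unencrypted cluster every listing call takes the fast path, which is exactly the lock-contention win the issue describes.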
[jira] [Updated] (HDFS-10458) getFileEncryptionInfo should return quickly for non-encrypted cluster
[ https://issues.apache.org/jira/browse/HDFS-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-10458: - Status: Patch Available (was: Open)
[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300129#comment-15300129 ] Rakesh R commented on HDFS-9833: Attached new patch fixing checkstyle warnings. > Erasure coding: recomputing block checksum on the fly by reconstructing the > missed/corrupt block data > - > > Key: HDFS-9833 > URL: https://issues.apache.org/jira/browse/HDFS-9833 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Rakesh R > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-9833-00-draft.patch, HDFS-9833-01.patch, > HDFS-9833-02.patch, HDFS-9833-03.patch, HDFS-9833-04.patch, > HDFS-9833-05.patch, HDFS-9833-06.patch > > > As discussed in HDFS-8430 and HDFS-9694, to compute striped file checksum > even some of striped blocks are missed, we need to consider recomputing block > checksum on the fly for the missed/corrupt blocks. To recompute the block > checksum, the block data needs to be reconstructed by erasure decoding, and > the main needed codes for the block reconstruction could be borrowed from > HDFS-9719, the refactoring of the existing {{ErasureCodingWorker}}. In EC > worker, reconstructed blocks need to be written out to target datanodes, but > here in this case, the remote writing isn't necessary, as the reconstructed > block data is only used to recompute the checksum. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-9833: --- Attachment: HDFS-9833-06.patch
[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics
[ https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300120#comment-15300120 ] Rakesh R commented on HDFS-10434: - Test case failures are not related to the patch, please ignore it. > Fix intermittent test failure of TestDataNodeErasureCodingMetrics > - > > Key: HDFS-10434 > URL: https://issues.apache.org/jira/browse/HDFS-10434 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-10434-00.patch, HDFS-10434-01.patch > > > This jira is to fix the test case failure. > Reference : > [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/] > {code} > Error Message > Bad value for metric EcReconstructionTasks expected:<1> but was:<0> > Stacktrace > java.lang.AssertionError: Bad value for metric EcReconstructionTasks > expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300065#comment-15300065 ] Hadoop QA commented on HDFS-9833: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 
1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s {color} | {color:red} hadoop-hdfs-project: patch generated 3 new + 118 unchanged - 0 fixed = 121 total (was 118) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 34s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 50s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806119/HDFS-9833-05.patch | | JIRA Issue | HDFS-9833 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 6b2479e80ca6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dcbb700 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15559/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15559/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | unit test logs |
[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1521#comment-1521 ] Rakesh R commented on HDFS-9833: Thanks a lot [~drankye] for reviewing the patch and for the offline discussions. I've uploaded a new patch addressing the comments except the 2nd point. Also, added a new test case to verify the checksum after node decommissioning (block locations will be duplicated after the decommission operation). bq. HashMap here might be little heavy, an array should work instead. The checksum logic uses {{namenode.getBlockLocations(src, start, length)}} to get the block locations. This list does not guarantee any order, and it also contains duplicated block info (index and its source node). While computing the block checksum, it needs to skip blocks that were already considered. With a {{HashMap}}, duplicate indices are handled internally, which I feel keeps the logic simple. Also, this hashmap is used locally and contains only a few entries. With an array, we would need extra logic to skip the duplicate nodes and possibly to sort. What's your opinion on keeping the existing hashmap? Below is a sample block indices list after the decommissioning operation. {{'}} represents a decommissioned node's index. Here, the list contains duplicated blocks and does not maintain any order. 
{code} 0, 2, 3, 4, 5, 6, 7, 8, 1, 1' {code}
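The duplicate-index list above can be deduplicated with a small map keyed by block index. A sorted map is used in this sketch to also get ascending order; the data structure in the actual patch may differ, and the node names are hypothetical:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of deduplicating the block-index list from getBlockLocations:
// later duplicates (e.g. the 1' entry from the decommissioned node) are
// skipped. Illustrative only; not the code in the HDFS-9833 patch.
public class BlockIndexDedupSketch {

    public static Map<Integer, String> dedup(int[] indices) {
        Map<Integer, String> liveBlockIndices = new TreeMap<>();
        for (int idx : indices) {
            // first source node per index wins; duplicates are ignored
            liveBlockIndices.putIfAbsent(idx, "dn-" + idx);
        }
        return liveBlockIndices;
    }

    public static void main(String[] args) {
        // the sample list above: unordered, with 1 appearing twice
        int[] indices = {0, 2, 3, 4, 5, 6, 7, 8, 1, 1};
        System.out.println(dedup(indices).keySet()); // [0, 1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```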
[jira] [Updated] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data
[ https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-9833: --- Attachment: HDFS-9833-05.patch