[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components
[ https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848627#comment-16848627 ]

Takanobu Asanuma commented on HDFS-14508:
-----------------------------------------

Thanks for your comment, [~crh]. Makes sense. Let's reimplement {{isSecurityEnabled}} and {{getSafemode}} etc. in {{FederationMetrics}} and use them in the Router UI. I'm still wondering whether we can remove {{NamenodeBeanMetrics}}.

Hi [~elgoiri], I would also like to hear your opinion since you implemented most of it.

> RBF: Clean-up and refactor UI components
> -----------------------------------------
>
>                 Key: HDFS-14508
>                 URL: https://issues.apache.org/jira/browse/HDFS-14508
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: CR Hota
>            Assignee: Takanobu Asanuma
>            Priority: Minor
>
> Router UI has tags that are not used or incorrectly set. The code should be
> cleaned up. One such example is
> Path: (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url":
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDDS-1454) GC or other system pause events can trigger pipeline destroy for all the nodes in the cluster
[ https://issues.apache.org/jira/browse/HDDS-1454?focusedWorklogId=248635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248635 ]

ASF GitHub Bot logged work on HDDS-1454:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/May/19 05:32
            Start Date: 27/May/19 05:32
    Worklog Time Spent: 10m
      Work Description: supratimdeka commented on pull request #852: HDDS-1454. GC or other system pause events can trigger pipeline destroy for all the nodes in the cluster. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/852

https://issues.apache.org/jira/browse/HDDS-1454

Problem: In a MiniOzoneChaosCluster run it was observed that events like GC pauses or any other pauses in SCM can mark all the datanodes as stale in SCM. This will trigger multiple pipeline destroys and render the system unusable.

Solution: Added a timestamp check in NodeStateManager. If the heartbeat task detects a long scheduling delay since the last time it ran, the task skips health checks and node state transitions in the current iteration.

Test: The unit test simulates a JVM pause by simply pausing the iterations of the health check task. Once the health check task is "unpaused", the system condition is similar to that after a JVM pause. The test asserts that no node with heartbeats transitions to Stale or Dead after such a long scheduling delay.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

            Worklog Id:     (was: 248635)
            Time Spent: 10m
    Remaining Estimate: 0h

> GC or other system pause events can trigger pipeline destroy for all the
> nodes in the cluster
> -----------------------------------------------------------------------------
>
>                 Key: HDDS-1454
>                 URL: https://issues.apache.org/jira/browse/HDDS-1454
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Mukul Kumar Singh
>            Assignee: Supratim Deka
>            Priority: Major
>              Labels: MiniOzoneChaosCluster, pull-request-available
>          Time Spent: 10m
>   Remaining Estimate: 0h
>
> In a MiniOzoneChaosCluster run it was observed that events like GC pauses or
> any other pauses in SCM can mark all the datanodes as stale in SCM. This will
> trigger multiple pipeline destroys and render the system unusable.
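The skip-on-delay check described in the worklog above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names, not the actual NodeStateManager code: the periodic health-check task remembers when it last ran, and if the gap since then greatly exceeds the scheduling interval (for example after a GC or JVM pause), it skips node state transitions for that iteration instead of marking every datanode stale.

```java
class HeartbeatCheckTask {
  private final long skipThresholdMs;
  private long lastRunMs = -1;

  HeartbeatCheckTask(long intervalMs) {
    // Tolerate up to 2x the expected interval before treating the
    // delay as a pause (the factor is an illustrative choice).
    this.skipThresholdMs = 2 * intervalMs;
  }

  /** Returns true if health checks ran; false if skipped due to a pause. */
  boolean run(long nowMs) {
    boolean paused = lastRunMs >= 0 && (nowMs - lastRunMs) > skipThresholdMs;
    lastRunMs = nowMs;
    if (paused) {
      // After a long pause every node's last heartbeat looks old, so the
      // usual HEALTHY -> STALE -> DEAD transitions would be spurious.
      return false;
    }
    // ... perform the normal per-node health checks here ...
    return true;
  }
}
```

The key design point is that the check costs one timestamp comparison per iteration and requires no extra threads: the task self-diagnoses the pause from its own scheduling history.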
[jira] [Updated] (HDDS-1454) GC or other system pause events can trigger pipeline destroy for all the nodes in the cluster
[ https://issues.apache.org/jira/browse/HDDS-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDDS-1454:
---------------------------------
    Labels: MiniOzoneChaosCluster pull-request-available  (was: MiniOzoneChaosCluster)

> GC or other system pause events can trigger pipeline destroy for all the
> nodes in the cluster
> -----------------------------------------------------------------------------
>
>                 Key: HDDS-1454
>                 URL: https://issues.apache.org/jira/browse/HDDS-1454
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Mukul Kumar Singh
>            Assignee: Supratim Deka
>            Priority: Major
>              Labels: MiniOzoneChaosCluster, pull-request-available
>
> In a MiniOzoneChaosCluster run it was observed that events like GC pauses or
> any other pauses in SCM can mark all the datanodes as stale in SCM. This will
> trigger multiple pipeline destroys and render the system unusable.
[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components
[ https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848609#comment-16848609 ]

CR Hota commented on HDFS-14508:
--------------------------------

[~tasanuma] Thanks for starting the thread. It's better to have a RouterBeanMetrics that shares info about the router itself (version, status, security, etc.) and have one for federation, which covers the whole federation as the name suggests. The reason I am looking at a new bean dedicated to the router is that the Namenode bean has methods specific to namenodes (getSlowPeersReport, getSlowDisksReport, etc.) that aren't applicable to routers; also, if any change is made in the future (for example, adding a new metric specific to the namenode), the router would also have to provide a dummy implementation. It's better to de-couple. Though they look the same to API users, routers and namenodes are fundamentally quite different.

> RBF: Clean-up and refactor UI components
> -----------------------------------------
>
>                 Key: HDFS-14508
>                 URL: https://issues.apache.org/jira/browse/HDFS-14508
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: CR Hota
>            Assignee: Takanobu Asanuma
>            Priority: Minor
>
> Router UI has tags that are not used or incorrectly set. The code should be
> cleaned up. One such example is
> Path: (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url":
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}
[jira] [Work logged] (HDDS-1580) Obtain Handler reference in ContainerScrubber
[ https://issues.apache.org/jira/browse/HDDS-1580?focusedWorklogId=248607&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-248607 ]

ASF GitHub Bot logged work on HDDS-1580:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/May/19 04:22
            Start Date: 27/May/19 04:22
    Worklog Time Spent: 10m
      Work Description: shwetayakkali commented on pull request #842: HDDS-1580. Obtain Handler reference in ContainerScrubber
URL: https://github.com/apache/hadoop/pull/842#discussion_r287642534

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerScrubber.java
##
@@ -130,12 +128,13 @@ private void throttleScrubber(TimeStamp startTime) {

   private void scrub() {
-    Iterator containerIt = containerSet.getContainerIterator();
+    Iterator containerIt = controller.getContainerSetIterator();
     long count = 0;

     while (containerIt.hasNext()) {
       TimeStamp startTime = new TimeStamp(System.currentTimeMillis());
       Container container = containerIt.next();
+      Handler containerHandler = controller.getHandler(container);

Review comment: Yes @hanishakoneru, it is for later use in the directory scanner part.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

            Worklog Id:     (was: 248607)
            Time Spent: 1h  (was: 50m)

> Obtain Handler reference in ContainerScrubber
> ----------------------------------------------
>
>                 Key: HDDS-1580
>                 URL: https://issues.apache.org/jira/browse/HDDS-1580
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>    Affects Versions: 0.5.0
>            Reporter: Shweta
>            Assignee: Shweta
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>   Remaining Estimate: 0h
>
> Obtain a reference to the Handler based on containerType in scrub() in
> ContainerScrubber.java
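The pattern in the patch above — the scrubber asking the controller for the right Handler per container rather than holding handler references itself — can be sketched with a type-keyed lookup. The types below are simplified, hypothetical stand-ins, not Ozone's real ContainerController/Handler classes:

```java
import java.util.EnumMap;
import java.util.Map;

class HandlerLookup {
  enum ContainerType { KEY_VALUE }

  interface Handler {
    String name();
  }

  static class Controller {
    // One handler per container type, owned by the controller.
    private final Map<ContainerType, Handler> handlers =
        new EnumMap<>(ContainerType.class);

    void register(ContainerType type, Handler handler) {
      handlers.put(type, handler);
    }

    // Mirrors controller.getHandler(container) in the diff: callers
    // dispatch by the container's type instead of caching handlers.
    Handler getHandler(ContainerType containerType) {
      return handlers.get(containerType);
    }
  }
}
```

Centralizing the lookup in the controller means components like the scrubber or a future directory scanner stay decoupled from how handlers are created and registered.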
[jira] [Comment Edited] (HDFS-14475) RBF: Expose router security enabled status on the UI
[ https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848594#comment-16848594 ]

Takanobu Asanuma edited comment on HDFS-14475 at 5/27/19 3:50 AM:
------------------------------------------------------------------

Thanks for sharing your thoughts, [~crh]. That makes sense to me. I will start the discussion in the refactoring jira.

was (Author: tasanuma0829):
Thanks for sharing your thoughts, [~crh]. I will start the discussion in the refactoring jira.

> RBF: Expose router security enabled status on the UI
> -----------------------------------------------------
>
>                 Key: HDFS-14475
>                 URL: https://issues.apache.org/jira/browse/HDFS-14475
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: CR Hota
>            Assignee: CR Hota
>            Priority: Major
>         Attachments: HDFS-14475-HDFS-13891.001.patch,
>                      HDFS-14475-HDFS-13891.002.patch
>
> This is a branched-off Jira to expose a metric so that the router's security
> status can be displayed on the UI. We are still unclear if more work needs to
> be done for dealing with CORS etc.
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.
[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components
[ https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848597#comment-16848597 ]

Takanobu Asanuma commented on HDFS-14508:
-----------------------------------------

This is [~crh]'s comment in HDFS-14475:
{quote}
Fundamentally, since the router is becoming more intelligent, it may be a good idea to expose metrics separately and not necessarily implement NamenodeBeanMetrics, which has methods specific to the namenode that are not applicable to Routers.
{quote}
In addition to that, {{FederationMetrics}} covers most of the metrics of {{NamenodeBeanMetrics}} at the moment. Given that, can we put them together into {{FederationMetrics}} and remove {{NamenodeBeanMetrics}}? Or is there any reason that we should keep {{NamenodeBeanMetrics}}?

> RBF: Clean-up and refactor UI components
> -----------------------------------------
>
>                 Key: HDFS-14508
>                 URL: https://issues.apache.org/jira/browse/HDFS-14508
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: CR Hota
>            Assignee: Takanobu Asanuma
>            Priority: Minor
>
> Router UI has tags that are not used or incorrectly set. The code should be
> cleaned up. One such example is
> Path: (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url":
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}
[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI
[ https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848594#comment-16848594 ]

Takanobu Asanuma commented on HDFS-14475:
-----------------------------------------

Thanks for sharing your thoughts, [~crh]. I will start the discussion in the refactoring jira.

> RBF: Expose router security enabled status on the UI
> -----------------------------------------------------
>
>                 Key: HDFS-14475
>                 URL: https://issues.apache.org/jira/browse/HDFS-14475
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: CR Hota
>            Assignee: CR Hota
>            Priority: Major
>         Attachments: HDFS-14475-HDFS-13891.001.patch,
>                      HDFS-14475-HDFS-13891.002.patch
>
> This is a branched-off Jira to expose a metric so that the router's security
> status can be displayed on the UI. We are still unclear if more work needs to
> be done for dealing with CORS etc.
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.
[jira] [Commented] (HDFS-14512) ONE_SSD policy will be violated while writing data with DistributedFileSystem.create(....favoredNodes)
[ https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848561#comment-16848561 ]

Shen Yinjie commented on HDFS-14512:
------------------------------------

[~ayushtkn] and [~jojochuang] Thanks for your comments! Yes, I had two disks, one for DISK and one for SSD. It is used by HBase to choose the best regionservers.

> ONE_SSD policy will be violated while writing data with
> DistributedFileSystem.create(....favoredNodes)
> --------------------------------------------------------
>
>                 Key: HDFS-14512
>                 URL: https://issues.apache.org/jira/browse/HDFS-14512
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Shen Yinjie
>            Priority: Major
>
> Reproduce steps:
> 1. setStoragePolicy ONE_SSD for a path A;
> 2. Client writes data to path A via
> DistributedFileSystem.create(...favoredNodes), passing the favoredNodes
> parameter.
> Then the three replicas of a file in this path will be located on 2 SSD and
> 1 DISK, which violates the ONE_SSD policy.
> Not sure if I am clear?
[jira] [Commented] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
[ https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848432#comment-16848432 ]

Hadoop QA commented on HDFS-14513:
-----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 22s | trunk passed |
| +1 | compile | 0m 59s | trunk passed |
| +1 | checkstyle | 0m 45s | trunk passed |
| +1 | mvnsite | 1m 11s | trunk passed |
| +1 | shadedclient | 13m 27s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 3s | trunk passed |
| +1 | javadoc | 0m 51s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 3s | the patch passed |
| +1 | compile | 0m 56s | the patch passed |
| +1 | javac | 0m 56s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 38s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 9s | the patch passed |
| +1 | javadoc | 0m 49s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 107m 28s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 2m 2s | The patch does not generate ASF License warnings. |
| | | 167m 53s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCheckpoint |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14513 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12969809/HDFS-14513.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux de1b608ffcc8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 37900c5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/26845/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results |
[jira] [Updated] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
[ https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

He Xiaoqiao updated HDFS-14513:
-------------------------------
    Attachment:     (was: HDFS-14513.001.patch)

> FSImage which is being saved should be cleaned up while NameNode shuts down
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14513
>                 URL: https://issues.apache.org/jira/browse/HDFS-14513
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-14513.001.patch
>
> An FSImage checkpoint file that is being saved cannot be cleaned up if the
> NameNode shuts down at the same time.
[jira] [Updated] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
[ https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

He Xiaoqiao updated HDFS-14513:
-------------------------------
    Attachment: HDFS-14513.001.patch
        Status: Patch Available  (was: Open)

> FSImage which is being saved should be cleaned up while NameNode shuts down
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14513
>                 URL: https://issues.apache.org/jira/browse/HDFS-14513
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-14513.001.patch
>
> An FSImage checkpoint file that is being saved cannot be cleaned up if the
> NameNode shuts down at the same time.
[jira] [Commented] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
[ https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848392#comment-16848392 ]

He Xiaoqiao commented on HDFS-14513:
------------------------------------

[^HDFS-14513.001.patch] adds a hook for FSImageSaver to clean up the checkpointing fsimage file.

> FSImage which is being saved should be cleaned up while NameNode shuts down
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14513
>                 URL: https://issues.apache.org/jira/browse/HDFS-14513
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-14513.001.patch
>
> An FSImage checkpoint file that is being saved cannot be cleaned up if the
> NameNode shuts down at the same time.
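The hook-based cleanup described in the comment above can be sketched in plain Java. This is a hypothetical illustration of the general technique (a JVM shutdown hook that deletes a half-written checkpoint file), not the actual FSImageSaver patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class CheckpointCleanup {
  /** Best-effort removal of an incomplete checkpoint image, e.g. fsimage.ckpt. */
  static void deleteIncompleteCheckpoint(Path ckpt) {
    try {
      Files.deleteIfExists(ckpt);
    } catch (IOException e) {
      // Best effort during shutdown; nothing more can be done here.
    }
  }

  /** Register the cleanup so it runs even if the JVM exits mid-save. */
  static void installHook(Path ckpt) {
    Runtime.getRuntime().addShutdownHook(
        new Thread(() -> deleteIncompleteCheckpoint(ckpt)));
  }
}
```

In the real patch the hook would only remove the in-progress checkpoint file, never a completed fsimage, so a crash mid-save cannot leave a truncated image that later looks valid.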
[jira] [Updated] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
[ https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

He Xiaoqiao updated HDFS-14513:
-------------------------------
    Attachment: HDFS-14513.001.patch

> FSImage which is being saved should be cleaned up while NameNode shuts down
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14513
>                 URL: https://issues.apache.org/jira/browse/HDFS-14513
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-14513.001.patch
>
> An FSImage checkpoint file that is being saved cannot be cleaned up if the
> NameNode shuts down at the same time.
[jira] [Created] (HDFS-14513) FSImage which is being saved should be cleaned up while NameNode shuts down
He Xiaoqiao created HDFS-14513:
----------------------------------

             Summary: FSImage which is being saved should be cleaned up while NameNode shuts down
                 Key: HDFS-14513
                 URL: https://issues.apache.org/jira/browse/HDFS-14513
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
            Reporter: He Xiaoqiao
            Assignee: He Xiaoqiao

An FSImage checkpoint file that is being saved cannot be cleaned up if the NameNode shuts down at the same time.
[jira] [Commented] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache
[ https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848383#comment-16848383 ]

Hudson commented on HDFS-14402:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16605 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/16605/])
HDFS-14402. Use FileChannel.transferTo() method for transferring block (rakeshr: rev 37900c5639f8ba8d41b9fedc3d41ee0fbda7d5db)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlockLoader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MemoryMappableBlockLoader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/PmemMappableBlockLoader.java

> Use FileChannel.transferTo() method for transferring block to SCM cache
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14402
>                 URL: https://issues.apache.org/jira/browse/HDFS-14402
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>              Labels: SCM
>             Fix For: 3.3.0
>
>         Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch,
>                      HDFS-14402.002.patch, With-Cache-Improvement-Patch.png,
>                      Without-Cache-Improvement-Patch.png
>
> We will consider using the transferTo API to improve SCM's cache performance.
[jira] [Updated] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache
[ https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rakesh R updated HDFS-14402:
----------------------------
    Labels: SCM  (was: )

> Use FileChannel.transferTo() method for transferring block to SCM cache
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14402
>                 URL: https://issues.apache.org/jira/browse/HDFS-14402
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>              Labels: SCM
>             Fix For: 3.3.0
>
>         Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch,
>                      HDFS-14402.002.patch, With-Cache-Improvement-Patch.png,
>                      Without-Cache-Improvement-Patch.png
>
> We will consider using the transferTo API to improve SCM's cache performance.
[jira] [Updated] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache
[ https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rakesh R updated HDFS-14402:
----------------------------
       Resolution: Fixed
    Fix Version/s: 3.3.0
           Status: Resolved  (was: Patch Available)

I have committed the latest patch to trunk. Thanks [~PhiloHe] for the contribution!

> Use FileChannel.transferTo() method for transferring block to SCM cache
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14402
>                 URL: https://issues.apache.org/jira/browse/HDFS-14402
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch,
>                      HDFS-14402.002.patch, With-Cache-Improvement-Patch.png,
>                      Without-Cache-Improvement-Patch.png
>
> We will consider using the transferTo API to improve SCM's cache performance.
[jira] [Commented] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache
[ https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16848372#comment-16848372 ]

Rakesh R commented on HDFS-14402:
---------------------------------

Thank you [~PhiloHe] for the patch. The performance result looks promising and shows a time reduction. Perhaps we could plan a test comparison between HDD and NVMe devices; that would be interesting and could be attached to the umbrella Jira as well.

+1 LGTM, I will commit the latest patch shortly.

> Use FileChannel.transferTo() method for transferring block to SCM cache
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14402
>                 URL: https://issues.apache.org/jira/browse/HDFS-14402
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>         Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch,
>                      HDFS-14402.002.patch, With-Cache-Improvement-Patch.png,
>                      Without-Cache-Improvement-Patch.png
>
> We will consider using the transferTo API to improve SCM's cache performance.
[jira] [Comment Edited] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache
[ https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16846547#comment-16846547 ]

Feilong He edited comment on HDFS-14402 at 5/26/19 7:23 AM:
------------------------------------------------------------

I have conducted a simple test of caching 10*200MB files on a single node.

For the current trunk without this Jira's optimization, the IO metric is shown below.
!Without-Cache-Improvement-Patch.png!

After applying this Jira's patch ([^HDFS-14402.002.patch]), the IO metric is shown below.
!With-Cache-Improvement-Patch.png!

Please note that 25s and 11s are the respective approximate cache execution times, which is a visible improvement in cache performance.

was (Author: philohe):
I have conducted a simple test of caching 10*200MB files on a single node.

For the current trunk without this Jira's optimization, the IO metric is shown below.
!Without-Cache-Improvement-Patch.png!

After applying this Jira's patch ([^HDFS-14402.002.patch]), the IO metric is shown below.
!With-Cache-Improvement-Patch.png!

You can note that this Jira's patch can visibly improve the caching performance on persistent memory.

> Use FileChannel.transferTo() method for transferring block to SCM cache
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14402
>                 URL: https://issues.apache.org/jira/browse/HDFS-14402
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: caching, datanode
>            Reporter: Feilong He
>            Assignee: Feilong He
>            Priority: Major
>         Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch,
>                      HDFS-14402.002.patch, With-Cache-Improvement-Patch.png,
>                      Without-Cache-Improvement-Patch.png
>
> We will consider using the transferTo API to improve SCM's cache performance.
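The FileChannel.transferTo() technique behind the numbers above can be shown with a minimal, self-contained sketch. The class and method names are illustrative, not the actual MappableBlockLoader code: transferTo() lets the kernel move bytes from the block file to the cache file directly, avoiding the user-space read/write buffer loop that the pre-patch code performed.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class BlockCacheCopy {
  /** Copies blockFile into cacheFile via transferTo(); returns bytes copied. */
  static long copyToCache(Path blockFile, Path cacheFile) throws IOException {
    try (FileChannel src = FileChannel.open(blockFile, StandardOpenOption.READ);
         FileChannel dst = FileChannel.open(cacheFile,
             StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
      long size = src.size();
      long pos = 0;
      // transferTo() may transfer fewer bytes than requested, so loop
      // until the whole file has been copied.
      while (pos < size) {
        pos += src.transferTo(pos, size - pos, dst);
      }
      return pos;
    }
  }
}
```

On Linux this typically maps to sendfile-style zero-copy in the kernel, which is where the roughly 25s-to-11s reduction reported above would come from.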