[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566382#comment-16566382 ] Nanda kumar commented on HDDS-310: -- +1, LGTM. saveVolumeSetUsed can be made private, will fix this while committing. > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch, HDDS-310.01.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
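The failure above happens because VolumeSet.shutdown() is invoked from inside its own shutdown hook, and ShutdownHookManager.removeShutdownHook throws IllegalStateException once JVM shutdown is in progress. The kind of guarded removal the patch discussion implies can be sketched as follows (illustrative names and structure only, not the actual VolumeSet patch):

```java
// Illustrative sketch only -- not the actual VolumeSet code. shutdown() may be
// called both on normal service stop and from the shutdown hook itself, so the
// hook removal must tolerate "Shutdown in progress".
public class VolumeSetSketch {
    private Thread shutdownHook;

    public void register() {
        shutdownHook = new Thread(this::saveVolumeSetUsed);
        Runtime.getRuntime().addShutdownHook(shutdownHook);
    }

    public boolean isRegistered() {
        return shutdownHook != null;
    }

    public void shutdown() {
        saveVolumeSetUsed();
        if (shutdownHook != null) {
            try {
                Runtime.getRuntime().removeShutdownHook(shutdownHook);
            } catch (IllegalStateException e) {
                // JVM is already shutting down; the hook cannot (and need not)
                // be removed -- swallow instead of failing the hook.
            }
            shutdownHook = null;
        }
    }

    private void saveVolumeSetUsed() {
        // placeholder for persisting per-volume disk usage on shutdown
    }
}
```

Calling shutdown() twice is safe: the second call finds no registered hook and does nothing.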
[jira] [Created] (HDDS-313) Add metrics to ContainerStateMachine
Mukul Kumar Singh created HDDS-313: -- Summary: Add metrics to ContainerStateMachine Key: HDDS-313 URL: https://issues.apache.org/jira/browse/HDDS-313 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Reporter: Mukul Kumar Singh Metrics need to be added to ContainerStateMachine to keep track of various Ratis ops like writeStateMachineData/readStateMachineData/applyTransaction.
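A real patch for HDDS-313 would register counters with Hadoop's metrics2 framework (@Metric-annotated MutableCounterLong fields); the self-contained sketch below only shows the per-op counting structure being asked for, with hypothetical names:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of per-op counters for ContainerStateMachine, covering
// ops like writeStateMachineData, readStateMachineData and applyTransaction.
// Not the Ozone API; a real implementation would use Hadoop metrics2.
public class CSMMetricsSketch {
    private final ConcurrentMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Increment the counter for one Ratis op, creating it on first use.
    public void incr(String op) {
        counters.computeIfAbsent(op, k -> new LongAdder()).increment();
    }

    // Current count for an op; zero if the op was never recorded.
    public long get(String op) {
        LongAdder a = counters.get(op);
        return a == null ? 0 : a.sum();
    }
}
```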
[jira] [Updated] (HDDS-312) Add blockIterator to Container
[ https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-312: Fix Version/s: 0.2.1 > Add blockIterator to Container > -- > > Key: HDDS-312 > URL: https://issues.apache.org/jira/browse/HDDS-312 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > > This Jira is to add the newly added blockIterator to Container and its > implementing class KeyValueContainer.
[jira] [Created] (HDDS-312) Add blockIterator to Container
Bharat Viswanadham created HDDS-312: --- Summary: Add blockIterator to Container Key: HDDS-312 URL: https://issues.apache.org/jira/browse/HDDS-312 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham This Jira is to add the newly added blockIterator to Container and its implementing class KeyValueContainer.
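The contract of a block iterator like the one HDDS-312 wires into Container/KeyValueContainer -- iterate over a container's blocks, optionally restricted by a filter -- can be sketched as follows. The names and shapes here are assumptions for illustration, not the Ozone API:

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Illustrative sketch of a blockIterator contract: walk the blocks of one
// container, returning only those matching a filter (e.g. deleted blocks).
class BlockIteratorSketch<T> implements Iterator<T> {
    private final Iterator<T> inner;
    private final Predicate<T> filter;
    private T next; // next matching element, or null when exhausted

    BlockIteratorSketch(List<T> blocks, Predicate<T> filter) {
        this.inner = blocks.iterator();
        this.filter = filter;
        advance();
    }

    // Scan forward to the next element accepted by the filter.
    private void advance() {
        next = null;
        while (inner.hasNext()) {
            T candidate = inner.next();
            if (filter.test(candidate)) { next = candidate; return; }
        }
    }

    @Override public boolean hasNext() { return next != null; }

    @Override public T next() {
        if (next == null) throw new NoSuchElementException();
        T result = next;
        advance();
        return result;
    }
}
```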
[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566349#comment-16566349 ] Bharat Viswanadham commented on HDDS-310: - Hi [~nandakumar131] Thanks for the review. Agreed with comments, removed the usage of saveVolumeSetUsed. > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch, HDDS-310.01.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Updated] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-310: Attachment: HDDS-310.01.patch > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch, HDDS-310.01.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Commented] (HDFS-13784) Metrics sampling period is milliseconds instead of seconds.
[ https://issues.apache.org/jira/browse/HDFS-13784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566333#comment-16566333 ] genericqa commented on HDFS-13784: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 5s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.TestTrash | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13784 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12934001/HDFS-13784.patch | | Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall shadedclient findbugs checkstyle | | uname | Linux bbbf5f1edfca 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 23f3942 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/24685/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/24685/testReport/ | | Max. process+thread count | 1430 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers
[ https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566307#comment-16566307 ] Mukul Kumar Singh commented on HDDS-298: Thanks for updating the patch and raising the follow-up jira [~ajayydv]. The current patch looks really good to me; I have two minor comments, and the rest of the patch looks good. 1) ContainerMapping:217: I feel we should not use the earlier pipeline name once the container has been closed, so the pipeline name can be changed to `String name = CLOSE_PIPELINE_PREFIX + contInfo.containerID()`; 2) TestContainerMapping:185: there are commented-out lines in the code. > Implement SCMClientProtocolServer.getContainerWithPipeline for closed > containers > > > Key: HDDS-298 > URL: https://issues.apache.org/jira/browse/HDDS-298 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Ajay Kumar >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch > > > As [~ljain] mentioned during the review of HDDS-245, > SCMClientProtocolServer.getContainerWithPipeline doesn't return good > data for closed containers. For closed containers we are maintaining the > datanodes for a containerId in the ContainerStateMap.contReplicaMap. We need > to create a fake Pipeline object on-request and return it for the client to > locate the right datanodes to download data.
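The reviewer's first suggestion -- deriving a synthetic pipeline name from the container ID once the container is closed, rather than reusing the original pipeline's name -- amounts to the following trivial sketch. The prefix value is an assumption for illustration:

```java
// Sketch of the review suggestion for ContainerMapping:217. After a container
// closes, build the fake pipeline's name from the container ID instead of the
// original pipeline name. The prefix string is hypothetical.
public class ClosedPipelineNameSketch {
    static final String CLOSE_PIPELINE_PREFIX = "CLOSED-";

    static String pipelineNameForClosedContainer(long containerId) {
        return CLOSE_PIPELINE_PREFIX + containerId;
    }
}
```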
[jira] [Resolved] (HDFS-13084) [SPS]: Fix the branch review comments
[ https://issues.apache.org/jira/browse/HDFS-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R resolved HDFS-13084. - Resolution: Fixed Fix Version/s: HDFS-10285 I'm closing this issue as the {{IntraSPSNameNodeContext}} code, implemented specifically for the internal SPS service, has been removed from this branch. The internal SPS mechanism will be discussed and supported via the follow-up Jira task HDFS-12226. We have taken care of the comments related to this branch via the HDFS-13097, HDFS-13110, HDFS-13166, HDFS-13381 Jira sub-tasks. > [SPS]: Fix the branch review comments > - > > Key: HDFS-13084 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Uma Maheswara Rao G >Assignee: Rakesh R >Priority: Major > Fix For: HDFS-10285 > > > Fix the review comments provided by [~daryn] >
[jira] [Commented] (HDFS-13523) Support observer nodes in MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566256#comment-16566256 ] Konstantin Shvachko commented on HDFS-13523: Agree with [~xkrogen] that overriding the {{StartupOption}} is a bad idea. Just using {{StartupOption.OBSERVER}} as the parameter for {{createNameNode()}} should start an Observer NameNode. It does not require changes in {{MiniDFSNNTopology}} to specify the Observer. The main purpose of this jira is to provide methods in {{MiniDFSCluster}} to support Observer NameNode. # Startup. In an HA setup (where we get SBNs and now Observers) NameNodes start in Standby state, then one of them is transitioned into Active. Same with Observer: it starts in Standby state then transitions to Observer. {{transitionToStandby()}} has been introduced in one of the prior jiras. You don't need to worry about startup. See {{TestObserverNode}}. # So the more specific goal is to look at {{TestObserverNode}} and {{TestStateAlignmentContextWithHA}} and see what common methods could be added to {{MiniDFSCluster}} or to {{HATestUtil}}, to simplify starting and manipulating mini clusters with observer nodes. Examples include: ** Moving {{setObserverRead()}} into {{MiniDFSCluster}} ** Providing methods like {{setUpCluster(int numObservers)}} to set up a cluster with a given number of observers. > Support observer nodes in MiniDFSCluster > > > Key: HDFS-13523 > URL: https://issues.apache.org/jira/browse/HDFS-13523 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode, test >Reporter: Erik Krogen >Assignee: Sherwood Zheng >Priority: Major > Attachments: HADOOP-13523-HADOOP-12943.000.patch, > HADOOP-13523-HADOOP-12943.001.patch, HDFS-13523-HDFS-12943.001.patch, > HDFS-13523-HDFS-12943.002.patch, HDFS-13523-HDFS-12943.003.patch, > HDFS-13523-HDFS-12943.004.patch > > > MiniDFSCluster should support Observer nodes so that we can write decent > integration tests.
[jira] [Updated] (HDFS-13784) Metrics sampling period is milliseconds instead of seconds.
[ https://issues.apache.org/jira/browse/HDFS-13784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDFS-13784: --- Status: Patch Available (was: Open) > Metrics sampling period is milliseconds instead of seconds. > --- > > Key: HDFS-13784 > URL: https://issues.apache.org/jira/browse/HDFS-13784 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: chencan >Priority: Minor > Attachments: HDFS-13784.patch > > > Metrics sampling period is milliseconds instead of seconds; this patch > modifies the related configuration file.
[jira] [Updated] (HDFS-13784) Metrics sampling period is milliseconds instead of seconds.
[ https://issues.apache.org/jira/browse/HDFS-13784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chencan updated HDFS-13784: --- Attachment: HDFS-13784.patch > Metrics sampling period is milliseconds instead of seconds. > --- > > Key: HDFS-13784 > URL: https://issues.apache.org/jira/browse/HDFS-13784 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: chencan >Priority: Minor > Attachments: HDFS-13784.patch > > > Metrics sampling period is milliseconds instead of seconds; this patch > modifies the related configuration file.
[jira] [Created] (HDFS-13784) Metrics sampling period is milliseconds instead of seconds.
chencan created HDFS-13784: -- Summary: Metrics sampling period is milliseconds instead of seconds. Key: HDFS-13784 URL: https://issues.apache.org/jira/browse/HDFS-13784 Project: Hadoop HDFS Issue Type: Bug Reporter: chencan Metrics sampling period is milliseconds instead of seconds; this patch modifies the related configuration file.
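The bug being reported is a unit mismatch: a sampling period documented in seconds is consumed as if it were milliseconds. Whatever the patch changes in the configuration files, the underlying rule is that any scheduler expecting milliseconds must convert explicitly. A minimal sketch, with illustrative names (not the Hadoop metrics code):

```java
import java.util.concurrent.TimeUnit;

// Sketch of the unit confusion behind HDFS-13784: a period read from
// configuration in seconds must be converted before being handed to an
// API that expects milliseconds, rather than passed through raw.
public class SamplingPeriodSketch {
    static long periodToMillis(long periodSeconds) {
        return TimeUnit.SECONDS.toMillis(periodSeconds);
    }
}
```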
[jira] [Created] (HDFS-13783) Balancer: make the balancer a long-running service process so it is easy to monitor
maobaolong created HDFS-13783: - Summary: Balancer: make the balancer a long-running service process so it is easy to monitor Key: HDFS-13783 URL: https://issues.apache.org/jira/browse/HDFS-13783 Project: Hadoop HDFS Issue Type: New Feature Components: balancer mover Affects Versions: 3.0.3 Reporter: maobaolong If the balancer ran as a long-lived service process, like the namenode and datanode, we could collect balancer metrics that tell us its status and the number of blocks it has moved, and we could get or set the balance plan through a balancer webUI. Many things become possible with a long-running balancer service. So, shall we start to plan the new Balancer? I hope this feature can make it into the next release of Hadoop.
[jira] [Updated] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data of Containers when required during replication
[ https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-230: --- Summary: ContainerStateMachine should provide readStateMachineData api to read data if Containers with required during replication (was: Ozone Datanode exits during data write through Ratis) > ContainerStateMachine should provide readStateMachineData api to read data if > Containers with required during replication > - > > Key: HDDS-230 > URL: https://issues.apache.org/jira/browse/HDDS-230 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-230.001.patch, HDDS-230.002.patch > > > Ozone datanode exits during data write with the following exception. > {code} > 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: > Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index > to:4565 > 2018-07-05 14:10:01,607 ERROR > org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit > status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the > StateMachineUpdater hits Throwable > java.lang.NullPointerException > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272) > at > org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058) > at > org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154) > at java.lang.Thread.run(Thread.java:745) > {code} > This might be as a result of a ratis transaction which was not written > through the "writeStateMachineData" phase, however it was added to the raft > log. This implied that stateMachineUpdater now applies a transaction without > the corresponding entry being added to the stateMachine. 
> I am raising this jira to track the issue and will also raise a Ratis jira if > required.
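The NullPointerException described above arises when applyTransaction runs for a log index whose writeStateMachineData phase never executed on this replica (for example, an entry received through log replication), so the local data is missing. A readStateMachineData API gives the state machine a fallback path. The following self-contained sketch shows the shape of that guard with hypothetical names; it is not the Ozone code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the HDDS-230 failure mode and fix direction: keep a
// write-through map of state-machine data by log index, and when apply hits
// an index that was never written locally, reload it via readStateMachineData
// instead of dereferencing null.
public class StateMachineDataSketch {
    private final Map<Long, byte[]> writtenData = new ConcurrentHashMap<>();

    public void writeStateMachineData(long index, byte[] data) {
        writtenData.put(index, data);
    }

    // Fallback for a cache miss; a real implementation would read the chunk
    // back from container storage on disk. Placeholder body here.
    public byte[] readStateMachineData(long index) {
        return new byte[0];
    }

    public byte[] applyTransaction(long index) {
        byte[] data = writtenData.get(index);
        if (data == null) {
            // Entry was replicated without a local writeStateMachineData phase.
            data = readStateMachineData(index);
        }
        return data; // never null, so the StateMachineUpdater does not crash
    }
}
```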
[jira] [Updated] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data of Containers when required during replication
[ https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-230: --- Status: Patch Available (was: Open) > ContainerStateMachine should provide readStateMachineData api to read data if > Containers with required during replication > - > > Key: HDDS-230 > URL: https://issues.apache.org/jira/browse/HDDS-230 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-230.001.patch, HDDS-230.002.patch > > > Ozone datanode exits during data write with the following exception. > {code} > 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: > Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index > to:4565 > 2018-07-05 14:10:01,607 ERROR > org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit > status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the > StateMachineUpdater hits Throwable > java.lang.NullPointerException > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272) > at > org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058) > at > org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154) > at java.lang.Thread.run(Thread.java:745) > {code} > This might be as a result of a ratis transaction which was not written > through the "writeStateMachineData" phase, however it was added to the raft > log. This implied that stateMachineUpdater now applies a transaction without > the corresponding entry being added to the stateMachine. > I am raising this jira to track the issue and will also raise a Ratis jira if > required. 
[jira] [Updated] (HDDS-230) Ozone Datanode exits during data write through Ratis
[ https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-230: --- Attachment: HDDS-230.002.patch > Ozone Datanode exits during data write through Ratis > > > Key: HDDS-230 > URL: https://issues.apache.org/jira/browse/HDDS-230 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-230.001.patch, HDDS-230.002.patch > > > Ozone datanode exits during data write with the following exception. > {code} > 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: > Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index > to:4565 > 2018-07-05 14:10:01,607 ERROR > org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit > status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the > StateMachineUpdater hits Throwable > java.lang.NullPointerException > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272) > at > org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058) > at > org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154) > at java.lang.Thread.run(Thread.java:745) > {code} > This might be as a result of a ratis transaction which was not written > through the "writeStateMachineData" phase, however it was added to the raft > log. This implied that stateMachineUpdater now applies a transaction without > the corresponding entry being added to the stateMachine. > I am raising this jira to track the issue and will also raise a Ratis jira if > required. 
[jira] [Commented] (HDFS-13735) Make QJM HTTP URL connection timeout configurable
[ https://issues.apache.org/jira/browse/HDFS-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566200#comment-16566200 ] Konstantin Shvachko commented on HDFS-13735: +1. Let's move on. I think this should go into trunk. > Make QJM HTTP URL connection timeout configurable > - > > Key: HDFS-13735 > URL: https://issues.apache.org/jira/browse/HDFS-13735 > Project: Hadoop HDFS > Issue Type: Improvement > Components: qjm >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Minor > Attachments: HDFS-13735.000.patch, HDFS-13735.001.patch > > > We've seen "connect timed out" happen internally when QJM tries to open HTTP > connections to JNs. This is now using {{newDefaultURLConnectionFactory}} > which uses the default timeout 60s, and is not configurable. > It would be better for this to be configurable, especially for > ObserverNameNode (HDFS-12943), where latency is important, and 60s may not be > a good value.
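What "making the timeout configurable" amounts to can be sketched as follows; the class and default below are assumptions for illustration (the real patch threads a configured value into Hadoop's URLConnectionFactory instead of the hardcoded 60s):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical sketch of a connection factory with a configurable timeout,
// falling back to the 60s default the issue describes when the configured
// value is missing or invalid. Not the actual Hadoop URLConnectionFactory.
public class TimeoutFactorySketch {
    static final int DEFAULT_TIMEOUT_MS = 60000;
    private final int timeoutMs;

    TimeoutFactorySketch(int configuredTimeoutMs) {
        this.timeoutMs = configuredTimeoutMs > 0 ? configuredTimeoutMs : DEFAULT_TIMEOUT_MS;
    }

    HttpURLConnection open(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(timeoutMs); // fail fast instead of hanging for 60s
        conn.setReadTimeout(timeoutMs);
        return conn;
    }

    int timeoutMs() { return timeoutMs; }
}
```

For an ObserverNameNode deployment, a much smaller value than 60s would be configured so tailing does not stall on a slow JournalNode.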
[jira] [Commented] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566196#comment-16566196 ] Konstantin Shvachko commented on HDFS-13767: Looks good. Comments are good, but look through all of them and check the spelling and phrasing. # I suggest this comment change: {code} // The call processing should be postponed until the client's // state id is aligned (>=) with the server state id. For ACTIVE, // clients always have state id smaller than the server's, // so this check only applies to STANDBY and OBSERVER. // NOTE: // Inserting the call back to the queue can change the order of call // execution compared to their original placement into the queue. // This is not a problem, because HDFS does not have any constraints // on ordering the incoming rpc requests. // Also, Observer handles only reads, which are commutative. // Re-queue the call and continue internalQueueCall(call); {code} # You can use {{getLastSeenStateId()}} in {{GlobalStateIdContext.updateResponseState()}} to avoid code redundancy. # Also it would be good to remove the unused import in {{ClientNamenodeProtocolTranslatorPB}} which was introduced by HDFS-13688. > Add msync server implementation. > > > Key: HDFS-13767 > URL: https://issues.apache.org/jira/browse/HDFS-13767 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13767-HDFS-12943.001.patch, > HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, > HDFS-13767.WIP.004.patch > > > This is a follow-up on HDFS-13688, where the msync API is introduced to > {{ClientProtocol}} but the server side implementation is missing. This > Jira is to implement the server side logic.
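The postponement rule in the suggested comment -- serve a call only once the server's applied state id has caught up with the state id the client has seen, re-queueing it otherwise (safe because reads are commutative) -- can be sketched as a self-contained decision with illustrative names; this is not the HDFS-13767 implementation:

```java
import java.util.PriorityQueue;
import java.util.Queue;

// Sketch of state-id alignment on a STANDBY/OBSERVER server: a read whose
// client has seen a newer state id than the server has applied is postponed,
// and drained once the server's edits catch up.
public class StateAlignmentSketch {
    private long serverStateId;
    // postponed calls, keyed by the state id each client has seen
    private final Queue<Long> callQueue = new PriorityQueue<>();

    public StateAlignmentSketch(long serverStateId) {
        this.serverStateId = serverStateId;
    }

    /** @return true if the call can be processed now, false if re-queued. */
    public boolean process(long clientSeenStateId) {
        if (clientSeenStateId > serverStateId) {
            callQueue.add(clientSeenStateId); // postpone until edits catch up
            return false;
        }
        return true;
    }

    /** Called as the server applies edits; returns how many calls drained. */
    public int advanceTo(long newStateId) {
        serverStateId = Math.max(serverStateId, newStateId);
        int processed = 0;
        while (!callQueue.isEmpty() && callQueue.peek() <= serverStateId) {
            callQueue.poll();
            processed++;
        }
        return processed;
    }
}
```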
[jira] [Updated] (HDFS-13780) Postpone NameNode state discovery in ObserverReadProxyProvider until the first real RPC call.
[ https://issues.apache.org/jira/browse/HDFS-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13780: --- Description: Currently {{ObserverReadProxyProvider}} during instantiation discovers Observers by poking known NameNodes and checking their states. This rather expensive process can be postponed until the first actual RPC call. This is an optimization. was:Currently {{ObserverReadProxyProvider}} during instantiation discovers Observers by poking known NameNodes and checking their states. This rather expensive process can be postponed until the first actual RPC call. > Postpone NameNode state discovery in ObserverReadProxyProvider until the > first real RPC call. > - > > Key: HDFS-13780 > URL: https://issues.apache.org/jira/browse/HDFS-13780 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Konstantin Shvachko >Priority: Major > > Currently {{ObserverReadProxyProvider}} during instantiation discovers > Observers by poking known NameNodes and checking their states. This rather > expensive process can be postponed until the first actual RPC call. > This is an optimization. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13781) Unit tests for standby reads.
[ https://issues.apache.org/jira/browse/HDFS-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-13781: --- Summary: Unit tests for standby reads. (was: Unit test for standby reads.) > Unit tests for standby reads. > - > > Key: HDFS-13781 > URL: https://issues.apache.org/jira/browse/HDFS-13781 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: test >Reporter: Konstantin Shvachko >Priority: Major > > Create more unit tests supporting standby reads feature. Let's come up with a > list of tests that provide sufficient test coverage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13782) IPFailoverProxyProvider should work with SBN
Konstantin Shvachko created HDFS-13782: -- Summary: IPFailoverProxyProvider should work with SBN Key: HDFS-13782 URL: https://issues.apache.org/jira/browse/HDFS-13782 Project: Hadoop HDFS Issue Type: Sub-task Components: test Reporter: Konstantin Shvachko Currently {{ObserverReadProxyProvider}} is based on {{ConfiguredFailoverProxyProvider}}. We should also be able to perform SBN reads in the case of {{IPFailoverProxyProvider}}.
[jira] [Created] (HDFS-13781) Unit test for standby reads.
Konstantin Shvachko created HDFS-13781: -- Summary: Unit test for standby reads. Key: HDFS-13781 URL: https://issues.apache.org/jira/browse/HDFS-13781 Project: Hadoop HDFS Issue Type: Sub-task Components: test Reporter: Konstantin Shvachko Create more unit tests supporting the standby reads feature. Let's come up with a list of tests that provide sufficient test coverage.
[jira] [Created] (HDFS-13780) Postpone NameNode state discovery in ObserverReadProxyProvider until the first real RPC call.
Konstantin Shvachko created HDFS-13780: -- Summary: Postpone NameNode state discovery in ObserverReadProxyProvider until the first real RPC call. Key: HDFS-13780 URL: https://issues.apache.org/jira/browse/HDFS-13780 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs-client Reporter: Konstantin Shvachko Currently {{ObserverReadProxyProvider}} during instantiation discovers Observers by poking known NameNodes and checking their states. This rather expensive process can be postponed until the first actual RPC call. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13779) Implement performFailover logic for ObserverReadProxyProvider.
Konstantin Shvachko created HDFS-13779: -- Summary: Implement performFailover logic for ObserverReadProxyProvider. Key: HDFS-13779 URL: https://issues.apache.org/jira/browse/HDFS-13779 Project: Hadoop HDFS Issue Type: Sub-task Components: hdfs-client Reporter: Konstantin Shvachko Currently {{ObserverReadProxyProvider}} inherits the {{performFailover()}} method from {{ConfiguredFailoverProxyProvider}}, which simply increments the index and switches over to another NameNode. The logic for ORPP should be smart enough to choose another observer; otherwise it can switch to an SBN, where reads are disallowed, or to an ANN, which defeats the purpose of reads from standby. This was discussed in HDFS-12976.
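The failover logic HDFS-13779 asks for — cycling past non-observers instead of blindly incrementing the index — can be sketched as below. The enum, method name, and the "-1 means fall back to the active" convention are all hypothetical, not the eventual ORPP code:

```java
public class ObserverSelection {
    // Simplified HA states of the known NameNodes.
    enum HAState { ACTIVE, STANDBY, OBSERVER }

    // Return the index of the next OBSERVER after 'current', wrapping
    // around; -1 when no other observer exists (the caller would then
    // fall back to the active NameNode for the read).
    static int nextObserver(HAState[] nodes, int current) {
        for (int i = 1; i <= nodes.length; i++) {
            int idx = (current + i) % nodes.length;
            if (idx != current && nodes[idx] == HAState.OBSERVER) {
                return idx;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        HAState[] ns = { HAState.ACTIVE, HAState.STANDBY,
                         HAState.OBSERVER, HAState.OBSERVER };
        // Failing over from one observer lands on the other observer,
        // never on the STANDBY (reads disallowed) or the ACTIVE.
        if (nextObserver(ns, 2) != 3) throw new AssertionError();
        if (nextObserver(ns, 3) != 2) throw new AssertionError();
        HAState[] one = { HAState.ACTIVE, HAState.OBSERVER };
        if (nextObserver(one, 1) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

A plain index increment, by contrast, would route the read to whatever node happens to be next, which is exactly the problem described above.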
[jira] [Created] (HDFS-13778) In TestStateAlignmentContextWithHA replace artificial AlignmentContextProxyProvider with real ObserverReadProxyProvider.
Konstantin Shvachko created HDFS-13778: -- Summary: In TestStateAlignmentContextWithHA replace artificial AlignmentContextProxyProvider with real ObserverReadProxyProvider. Key: HDFS-13778 URL: https://issues.apache.org/jira/browse/HDFS-13778 Project: Hadoop HDFS Issue Type: Sub-task Components: test Reporter: Konstantin Shvachko TestStateAlignmentContextWithHA uses an artificial AlignmentContextProxyProvider, which was temporarily needed for testing. Now that we have the real ObserverReadProxyProvider, it can take over from ACPP. This is also useful for testing the ORPP.
[jira] [Comment Edited] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566108#comment-16566108 ] Ajay Kumar edited comment on HDFS-13532 at 8/1/18 10:38 PM: [~crh], I had an offline discussion with [~jnp], [~xyao] and [~arpitagarwal] on this. If the ServiceTicket is cached, the Router will not be hammering the KDC for each request. If other security experts in the community believe that this will be an issue, maybe we can tweak our approach 1 to mitigate it. Attached [RBF-DelegationToken-Approach1b.pdf|https://issues.apache.org/jira/secure/attachment/12933984/RBF-DelegationToken-Approach1b.pdf] to discuss a slightly modified approach. was (Author: ajayydv): [~crh], I had an offline discussion with [~jnp], [~xyao] and [~arpitagarwal] on this. If the ServiceTicket is cached, the Router will not be hammering the KDC for each request. If other security experts in the community believe that this will be an issue, maybe we can tweak our approach 1 to mitigate it. Attached [^RBF _ Security delegation token thoughts.pdf] to discuss a slightly modified approach. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, > RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens.
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566108#comment-16566108 ] Ajay Kumar commented on HDFS-13532: --- [~crh], I had an offline discussion with [~jnp], [~xyao] and [~arpitagarwal] on this. If the ServiceTicket is cached, the Router will not be hammering the KDC for each request. If other security experts in the community believe that this will be an issue, maybe we can tweak our approach 1 to mitigate it. Attached [^RBF _ Security delegation token thoughts.pdf] to discuss a slightly modified approach. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, > RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens.
[jira] [Updated] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13532: -- Attachment: RBF-DelegationToken-Approach1b.pdf > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, > RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based > Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566058#comment-16566058 ] genericqa commented on HDFS-13767: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-12943 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 49s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 17s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 6s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 21s{color} | {color:green} HDFS-12943 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 26s{color} | {color:green} HDFS-12943 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 35s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}257m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.TestCrcCorruption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 | | JIRA Issue | HDFS-13767 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933949/HDFS-13767-HDFS-12943.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4d16584de13d 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers
[ https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565921#comment-16565921 ] genericqa commented on HDDS-298: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s{color} | {color:green} server-scm in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 59m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-298 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933962/HDDS-298.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a2bf5eb5455d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 603a574 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDDS-Build/681/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/681/testReport/ | | Max. process+thread count | 301 (vs. ulimit of 1) | | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/681/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implement
[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565887#comment-16565887 ] Nanda kumar commented on HDDS-310: -- [~bharatviswa], you don't need the {{saveVolumeSetUsed}} variable. Case 1: If someone calls the {{shutdown}} method first, we execute {{saveVolumeSetUsed()}} and remove the shutdown hook; in this case, the shutdown hook will never run. Case 2: If no one calls the {{shutdown}} method, we execute the shutdown hook, which calls {{saveVolumeSetUsed()}}. The issue was removing the shutdown hook from inside the shutdown hook itself; this is avoided by the introduction of the {{saveVolumeSetUsed()}} method. Even without the {{saveVolumeSetUsed}} variable, the behavior will be the same. > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > 
org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
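Nanda's two cases for HDDS-310 can be traced with a small mock of the shutdown-hook manager. Everything below is a simplified stand-in, not the real org.apache.hadoop.util.ShutdownHookManager or the VolumeSet patch:

```java
import java.util.ArrayList;
import java.util.List;

public class HookDemo {
    static boolean shutdownInProgress = false;
    static final List<Runnable> hooks = new ArrayList<>();

    // Mirrors the real manager's behavior: deregistration is refused
    // once JVM shutdown has begun.
    static void removeShutdownHook(Runnable r) {
        if (shutdownInProgress) {
            throw new IllegalStateException(
                "Shutdown in progress, cannot remove a shutdownHook");
        }
        hooks.remove(r);
    }

    // The fix: the saving work lives in its own method, which the
    // shutdown hook can call directly without touching hook registration.
    static void saveVolumeSetUsed() { /* persist disk-usage data */ }

    // Case 1: explicit shutdown (e.g. datanode restart) does the work,
    // then deregisters the hook so it never runs a second time.
    static void shutdown(Runnable hook) {
        saveVolumeSetUsed();
        removeShutdownHook(hook);
    }

    public static void main(String[] args) {
        Runnable hook = HookDemo::saveVolumeSetUsed;
        hooks.add(hook);
        shutdown(hook);                 // Case 1: fine, hook removed
        if (!hooks.isEmpty()) throw new AssertionError();

        hooks.add(hook);
        shutdownInProgress = true;      // Case 2: JVM exiting, hook running
        try {
            removeShutdownHook(hook);   // the old buggy call inside the hook
            throw new AssertionError("expected IllegalStateException");
        } catch (IllegalStateException expected) {
            System.out.println("ok");
        }
    }
}
```

The stack trace in the issue is Case 2 gone wrong: the original code routed the hook through {{shutdown()}}, which tried to deregister the hook while it was already executing.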
[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565869#comment-16565869 ] genericqa commented on HDDS-310: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} container-service in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 59m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-310 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933956/HDDS-310.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c17e928562a6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 603a574 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/680/testReport/ | | Max. process+thread count | 336 (vs. ulimit of 1) | | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/680/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > VolumeSet shutdown hook fails on datanode restart >
[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers
[ https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565864#comment-16565864 ] Ajay Kumar commented on HDDS-298: - [~msingh], patch v2 changes the pipeline name to {{"Close-pipline-"}}. Created [HDDS-311] to discuss the code cleanup you suggested in the other 3 points. > Implement SCMClientProtocolServer.getContainerWithPipeline for closed > containers > > > Key: HDDS-298 > URL: https://issues.apache.org/jira/browse/HDDS-298 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Ajay Kumar >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch > > > As [~ljain] mentioned during the review of HDDS-245 > SCMClientProtocolServer.getContainerWithPipeline doesn't return with good > data for closed containers. For closed containers we are maintaining the > datanodes for a containerId in the ContainerStateMap.contReplicaMap. We need > to create fake Pipeline object on-request and return it for the client to > locate the right datanodes to download data.
[jira] [Created] (HDDS-311) Clean up pipeline code in Container mapping.
Ajay Kumar created HDDS-311: --- Summary: Clean up pipeline code in Container mapping. Key: HDDS-311 URL: https://issues.apache.org/jira/browse/HDDS-311 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Ajay Kumar Originally suggested by [~msingh] in HDDS-298. This Jira is to discuss removing getReplicationPipeline in {{ContainerMapping}} (L208, L475, L532), as it creates a new pipeline.
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13421: -- Resolution: Fixed Status: Resolved (was: Patch Available) > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers
[ https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDDS-298: Attachment: HDDS-298.02.patch > Implement SCMClientProtocolServer.getContainerWithPipeline for closed > containers > > > Key: HDDS-298 > URL: https://issues.apache.org/jira/browse/HDDS-298 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Reporter: Elek, Marton >Assignee: Ajay Kumar >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch > > > As [~ljain] mentioned during the review of HDDS-245 > SCMClientProtocolServer.getContainerWithPipeline doesn't return with good > data for closed containers. For closed containers we are maintaining the > datanodes for a containerId in the ContainerStateMap.contReplicaMap. We need > to create fake Pipeline object on-request and return it for the client to > locate the right datanodes to download data. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565848#comment-16565848 ] Virajith Jalaparti commented on HDFS-13421: --- Committed this to the HDFS-12090 branch. Thanks [~ehiggs]. > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565832#comment-16565832 ] Ewan Higgs commented on HDFS-13421: --- HDFS-13777 > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565829#comment-16565829 ] Virajith Jalaparti commented on HDFS-13421: --- Thanks for the updated patch [~ehiggs]. +1 on [^HDFS-13421-HDFS-12090.009.patch] . Can you please link the other JIRA where you are going to add the end-to-end test? > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-310: Status: Patch Available (was: Open) > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDDS-310: Attachment: HDDS-310.00.patch > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-310.00.patch > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
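[Editor's note] The stack trace in HDDS-310 shows the shutdown hook itself invoking VolumeSet.shutdown(), which then calls removeShutdownHook while the JVM is already shutting down, so ShutdownHookManager throws IllegalStateException. A minimal sketch of the guard, with illustrative class and field names (this is not the actual HDDS-310 patch):

```java
import java.util.concurrent.atomic.AtomicBoolean;

class VolumeSetShutdownSketch {
  private final AtomicBoolean jvmShutdownInProgress = new AtomicBoolean(false);
  private boolean hookDeregistered = false;

  // Entry point used by the registered shutdown hook: record that the JVM
  // is already going down, then run the normal shutdown work.
  void shutdownFromHook() {
    jvmShutdownInProgress.set(true);
    shutdown();
  }

  // Normal shutdown path (e.g. datanode restart). Deregistering the hook
  // is only legal while the JVM is NOT already shutting down; otherwise
  // removeShutdownHook throws IllegalStateException, the failure in the log.
  void shutdown() {
    saveVolumeSetUsed();
    if (!jvmShutdownInProgress.get()) {
      hookDeregistered = true; // stands in for removeShutdownHook(...)
    }
  }

  // Persist per-volume usage before exit (stubbed for the sketch).
  private void saveVolumeSetUsed() {}

  boolean hookWasDeregistered() {
    return hookDeregistered;
  }
}
```

With this split, a restart-triggered shutdown() still deregisters the hook, while the hook path skips the illegal removal.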
[jira] [Updated] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13767: -- Attachment: HDFS-13767-HDFS-12943.001.patch > Add msync server implementation. > > > Key: HDFS-13767 > URL: https://issues.apache.org/jira/browse/HDFS-13767 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13767-HDFS-12943.001.patch, > HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, > HDFS-13767.WIP.004.patch > > > This is a followup on HDFS-13688, where the msync API is introduced to > {{ClientProtocol}} but the server side implementation is missing. This > Jira is to implement the server side logic. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565704#comment-16565704 ] Chen Liang commented on HDFS-13767: --- Posted v001 patch to fix the naming of the patch... > Add msync server implementation. > > > Key: HDFS-13767 > URL: https://issues.apache.org/jira/browse/HDFS-13767 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13767-HDFS-12943.001.patch, > HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, > HDFS-13767.WIP.004.patch > > > This is a followup on HDFS-13688, where the msync API is introduced to > {{ClientProtocol}} but the server side implementation is missing. This > Jira is to implement the server side logic. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal reassigned HDDS-310: -- Assignee: Bharat Viswanadham > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 0.2.1 > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565659#comment-16565659 ] genericqa commented on HDFS-13767: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-13767 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13767 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933838/HDFS-13767.WIP.004.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/24683/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add msync server implementation. > > > Key: HDFS-13767 > URL: https://issues.apache.org/jira/browse/HDFS-13767 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, > HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch > > > This is a followup on HDFS-13688, where the msync API is introduced to > {{ClientProtocol}} but the server side implementation is missing. This > Jira is to implement the server side logic. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565657#comment-16565657 ] genericqa commented on HDFS-13421: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12090 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 34s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 51s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} HDFS-12090 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}209m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger | | | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13421 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933921/HDFS-13421-HDFS-12090.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4f67f7e09f3d 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HDFS-13767) Add msync server implementation.
[ https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-13767: -- Status: Patch Available (was: In Progress) > Add msync server implementation. > > > Key: HDFS-13767 > URL: https://issues.apache.org/jira/browse/HDFS-13767 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Attachments: HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, > HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch > > > This is a followup on HDFS-13688, where the msync API is introduced to > {{ClientProtocol}} but the server side implementation is missing. This > Jira is to implement the server side logic. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13688) Introduce msync API call
[ https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-13688: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-12943 Status: Resolved (was: Patch Available) I just committed this to HDFS-12943 branch. Thanks [~vagarychen]! > Introduce msync API call > > > Key: HDFS-13688 > URL: https://issues.apache.org/jira/browse/HDFS-13688 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS-13688-HDFS-12943.001.patch, > HDFS-13688-HDFS-12943.002.patch, HDFS-13688-HDFS-12943.002.patch, > HDFS-13688-HDFS-12943.003.patch, HDFS-13688-HDFS-12943.004.patch, > HDFS-13688-HDFS-12943.005.patch, HDFS-13688-HDFS-12943.WIP.002.patch, > HDFS-13688-HDFS-12943.WIP.patch > > > As mentioned in the design doc in HDFS-12943, to ensure consistent read, we > need to introduce an RPC call {{msync}}. Specifically, client can issue a > msync call to Observer node along with a transactionID. The msync will only > return when the Observer's transactionID has caught up to the given ID. This > JIRA is to add this API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
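[Editor's note] The HDFS-13688 description states the msync contract: the call returns only once the Observer's transaction id has caught up to the id the client supplies. A hedged sketch of that wait-until-caught-up logic using plain monitor wait/notify (class and method names are illustrative, not the HDFS-12943 branch implementation; the real RPC has no timeout parameter shown in the description, which is added here only so the sketch terminates):

```java
class MsyncSketch {
  private long appliedTxId = 0;

  // Called as the Observer applies edits tailed from the Active NameNode.
  synchronized void applyEditsUpTo(long txId) {
    appliedTxId = Math.max(appliedTxId, txId);
    notifyAll(); // wake any msync callers waiting for this id
  }

  // msync semantics: return true only once the Observer has caught up to
  // the client's transaction id; false if the timeout expires first.
  synchronized boolean msync(long clientSeenTxId, long timeoutMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (appliedTxId < clientSeenTxId) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        return false; // still behind when the deadline passed
      }
      try {
        wait(remaining);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }
}
```

The loop re-checks the condition after every wakeup, which is the standard guarded-block pattern for spurious wakeups.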
[jira] [Commented] (HDFS-13532) RBF: Adding security
[ https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565634#comment-16565634 ] CR Hota commented on HDFS-13532: Thanks [~ajayydv] for going through the doc. The main discussion (around the cons of Approach 1) centered on avoiding calls to the KDC. The Router does maintain a pool of connections, but that pool gets recycled every x interval and new connections are created when needed again. The less this architecture relies on the KDC overall, the better the Router can perform as a pure proxy with lower latencies. With the end-to-end delegation token route, the Router stays aligned as a proxy rather than a gateway. > RBF: Adding security > > > Key: HDFS-13532 > URL: https://issues.apache.org/jira/browse/HDFS-13532 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Sherwood Zheng >Priority: Major > Attachments: RBF _ Security delegation token thoughts.pdf, > Security_for_Router-based Federation_design_doc.pdf > > > HDFS Router based federation should support security. This includes > authentication and delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13738) fsck -list-corruptfileblocks has infinite loop if user is not privileged.
[ https://issues.apache.org/jira/browse/HDFS-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-13738: --- Target Version/s: 3.2.0, 3.1.1, 3.0.4 (was: 3.0.0) > fsck -list-corruptfileblocks has infinite loop if user is not privileged. > - > > Key: HDFS-13738 > URL: https://issues.apache.org/jira/browse/HDFS-13738 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.6.0, 3.0.0 > Environment: Kerberized Hadoop cluster >Reporter: Wei-Chiu Chuang >Assignee: Yuen-Kuei Hsueh >Priority: Major > Attachments: HDFS-13738.001.patch, HDFS-13738.test.patch > > > Found an interesting bug. > Execute following command as any non-privileged user: > {noformat} > # run fsck > $ hdfs fsck / -list-corruptfileblocks > {noformat} > {noformat} > FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds > Access denied for user systest. Superuser privilege is required > Fsck on path '/' FAILED > FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 0 milliseconds > Access denied for user systest. Superuser privilege is required > Fsck on path '/' FAILED > FSCK ended at Mon Jul 16 15:14:03 PDT 2018 in 1 milliseconds > Access denied for user systest. Superuser privilege is required > Fsck on path '/' FAILED > {noformat} > Reproducible on Hadoop 3.0.0 as well as 2.6.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
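[Editor's note] The repeated "Fsck on path '/' FAILED" output in HDFS-13738 is the signature of a client loop that re-issues the same paged request after every access-denied answer. A minimal sketch of a loop that terminates instead; the Fetcher interface, cookie scheme, and "FAILED" marker are stand-ins for the fsck client/server exchange, not the actual HDFS code:

```java
import java.util.ArrayList;
import java.util.List;

class FsckLoopSketch {
  interface Fetcher {
    String fetch(String cookie); // one fsck request, paged by a cookie
  }

  // Collect corrupt-block batches until the server has no more to report.
  // The key point: a reply marking failure (e.g. access denied) must also
  // end the loop, instead of triggering an identical retry forever.
  static List<String> listCorruptFileBlocks(Fetcher fetcher) {
    List<String> blocks = new ArrayList<>();
    String cookie = "0";
    while (true) {
      String reply = fetcher.fetch(cookie);
      if (reply.isEmpty() || reply.startsWith("FAILED")) {
        break; // done, or the request can never succeed: stop retrying
      }
      blocks.add(reply);
      cookie = reply; // advance pagination past the last returned block
    }
    return blocks;
  }
}
```

Since a superuser check fails deterministically, retrying the same request cannot make progress; surfacing the error once is the correct behavior.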
[jira] [Created] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.
Ewan Higgs created HDFS-13777: - Summary: [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work. Key: HDFS-13777 URL: https://issues.apache.org/jira/browse/HDFS-13777 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Ewan Higgs Assignee: Ewan Higgs When the SyncService is running, it should periodically take snapshots, make a snapshotdiff, and then distribute DNA_BACKUP work to the Datanodes (See HDFS-13421). Upon completion of the work, the NN should update the AliasMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
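[Editor's note] The HDFS-13777 description outlines one scheduling round: take a snapshot, diff it against the previous one, distribute DNA_BACKUP work, then update the AliasMap on completion. A hypothetical outline of that round; all names and the list-based "snapshot" representation are illustrative, not the HDFS-12090 branch APIs:

```java
import java.util.ArrayList;
import java.util.List;

class BackupSchedulerSketch {
  // One round of the periodic scheduler: diff the current snapshot against
  // the previous one, hand the resulting DNA_BACKUP work items out, and
  // record completed work in a stand-in alias map.
  static List<String> runRound(List<String> previousSnapshot,
                               List<String> currentSnapshot,
                               List<String> aliasMap) {
    List<String> work = new ArrayList<>();
    for (String file : currentSnapshot) {
      if (!previousSnapshot.contains(file)) {
        work.add(file); // snapshotdiff: files new since the last snapshot
      }
    }
    // In the real design the datanodes complete the work asynchronously and
    // the NN updates the AliasMap afterwards; completion is immediate here.
    aliasMap.addAll(work);
    return work;
  }
}
```

The sketch only captures the control flow (diff, distribute, record); the actual work items would be block-level BACKUP commands, not paths.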
[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block
[ https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565399#comment-16565399 ] Jinglun commented on HDFS-10240: [~jojochuang] [~sinago] got it, i will post one in 3 days. > Race between close/recoverLease leads to missing block > -- > > Key: HDFS-10240 > URL: https://issues.apache.org/jira/browse/HDFS-10240 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zhouyingchao >Assignee: Jinglun >Priority: Major > Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, > HDFS-10240.test.patch > > > We got a missing block in our cluster, and logs related to the missing block > are as follows: > 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 > blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, > primaryNodeIndex=-1, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* > blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > recovery started, > primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW] > 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.internalReleaseLease: File XX has not been closed. Lease > recovery is in progress. 
RecoveryId = 153006357 for block > blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > 2016-03-28,10:00:06,248 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* > checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, > primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]} > has not reached minimal replication 1 > 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.5.53:11402 is added to > blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]} > size 139 > 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size > 139 > 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size > 139 > 2016-03-28,10:00:08,808 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: > commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345, > newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, > 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, 
deleteBlock=false) > 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in block map 153006345 > 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in block map 153006345 > 2016-03-28,10:00:08,837 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.5.44:11402 by /10.114.5.44 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in block map 153006345 > From the log, I guess this is what has happened
[jira] [Commented] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM
[ https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565395#comment-16565395 ] genericqa commented on HDDS-304: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} server-scm in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDDS-304 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933916/HDDS-304.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ed747497967c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d920b9d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDDS-Build/679/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/679/testReport/ | | Max. process+thread count | 336 (vs. ulimit of 1) | | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm | | Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/679/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Process ContainerAction from
[jira] [Commented] (HDFS-10240) Race between close/recoverLease leads to missing block
[ https://issues.apache.org/jira/browse/HDFS-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565376#comment-16565376 ] Wei-Chiu Chuang commented on HDFS-10240: Ping. [~LiJinglun] [~sinago] appreciate your effort here. Would you like to post a new patch with tests? Thanks! > Race between close/recoverLease leads to missing block > -- > > Key: HDFS-10240 > URL: https://issues.apache.org/jira/browse/HDFS-10240 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: zhouyingchao >Assignee: Jinglun >Priority: Major > Attachments: HDFS-10240 scenarios.jpg, HDFS-10240-001.patch, > HDFS-10240.test.patch > > > We got a missing block in our cluster, and logs related to the missing block > are as follows: > 2016-03-28,10:00:06,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* > allocateBlock: XX. BP-219149063-10.108.84.25-1446859315800 > blk_1226490256_153006345{blockUCState=UNDER_CONSTRUCTION, > primaryNodeIndex=-1, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > 2016-03-28,10:00:06,205 INFO BlockStateChange: BLOCK* > blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > recovery started, > primary=ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW] > 2016-03-28,10:00:06,205 WARN org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.internalReleaseLease: File XX has not been closed. Lease > recovery is in progress. 
RecoveryId = 153006357 for block > blk_1226490256_153006345{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-3f5032bc-6006-4fcc-b0f7-b355a5b94f1b:NORMAL|RBW]]} > 2016-03-28,10:00:06,248 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* > checkFileProgress: blk_1226490256_153006345{blockUCState=COMMITTED, > primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]} > has not reached minimal replication 1 > 2016-03-28,10:00:06,358 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.5.53:11402 is added to > blk_1226490256_153006345{blockUCState=COMMITTED, primaryNodeIndex=2, > replicas=[ReplicaUnderConstruction[[DISK]DS-bcd22774-cf4d-45e9-a6a6-c475181271c9:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-ec1413ae-5541-4b44-8922-c928be3bb306:NORMAL|RBW], > > ReplicaUnderConstruction[[DISK]DS-85819f0d-bdbb-4a9b-b90c-eba078547c23:NORMAL|RBW]]} > size 139 > 2016-03-28,10:00:06,441 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.5.44:11402 is added to blk_1226490256_153006345 size > 139 > 2016-03-28,10:00:06,660 INFO BlockStateChange: BLOCK* addStoredBlock: > blockMap updated: 10.114.6.14:11402 is added to blk_1226490256_153006345 size > 139 > 2016-03-28,10:00:08,808 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: > commitBlockSynchronization(lastblock=BP-219149063-10.108.84.25-1446859315800:blk_1226490256_153006345, > newgenerationstamp=153006357, newlength=139, newtargets=[10.114.6.14:11402, > 10.114.5.53:11402, 10.114.5.44:11402], closeFile=true, 
deleteBlock=false) > 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.6.14:11402 by /10.114.6.14 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in block map 153006345 > 2016-03-28,10:00:08,836 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.5.53:11402 by /10.114.5.53 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in block map 153006345 > 2016-03-28,10:00:08,837 INFO BlockStateChange: BLOCK > NameSystem.addToCorruptReplicasMap: blk_1226490256 added as corrupt on > 10.114.5.44:11402 by /10.114.5.44 because block is COMPLETE and reported > genstamp 153006357 does not match genstamp in
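The genstamp mismatch in the addToCorruptReplicasMap log lines above can be modeled with a small sketch (hypothetical code, not the actual NameNode implementation): close() completed the block at genstamp 153006345 while lease recovery had already handed out recovery genstamp 153006357, so every replica reporting the new genstamp is flagged corrupt against the block map.

```java
// Toy model of the corruption check visible in the logs. Names and fields
// are illustrative; the real logic lives in the NameNode's BlockManager.
class GenstampRaceSketch {
    long blockMapGenstamp = 153006345L; // genstamp the block map kept at close()
    boolean blockComplete = true;       // close() won the race: state COMPLETE

    // Mirrors the condition in the addToCorruptReplicasMap log messages:
    // a COMPLETE block whose reported genstamp differs from the map's.
    boolean isReplicaCorrupt(long reportedGenstamp) {
        return blockComplete && reportedGenstamp != blockMapGenstamp;
    }
}
```

With these values, all three replicas reporting the recovery genstamp 153006357 satisfy the corrupt-replica condition, which matches the three log lines.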
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565373#comment-16565373 ] Ewan Higgs commented on HDFS-13421: --- 009 - Removed unused config option. The other test failures look like incorrect use of Mockito by other tests. > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Status: Open (was: Patch Available) > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Status: Patch Available (was: Open) > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Attachment: HDFS-13421-HDFS-12090.009.patch > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch, HDFS-13421-HDFS-12090.009.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13776) Add Storage policies related ClientProtocol methods
[ https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565338#comment-16565338 ] Dibyendu Karmakar commented on HDFS-13776: -- I will upload the patch shortly. > Add Storage policies related ClientProtocol methods > --- > > Key: HDFS-13776 > URL: https://issues.apache.org/jira/browse/HDFS-13776 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Major > > Currently unsetStoragePolicy and getStoragePolicy are not implemented in > RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13776) Add Storage policies related ClientProtocol methods
Dibyendu Karmakar created HDFS-13776: Summary: Add Storage policies related ClientProtocol methods Key: HDFS-13776 URL: https://issues.apache.org/jira/browse/HDFS-13776 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Dibyendu Karmakar Assignee: Dibyendu Karmakar Currently unsetStoragePolicy and getStoragePolicy are not implemented in RouterRpcServer. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM
[ https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-304: - Attachment: HDDS-304.000.patch > Process ContainerAction from datanode heartbeat in SCM > -- > > Key: HDDS-304 > URL: https://issues.apache.org/jira/browse/HDDS-304 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Affects Versions: 0.2.1 >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-304.000.patch > > > Datanodes send ContainerActions as part of heartbeat, we must add logic in > SCM to process those ContainerActions. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM
[ https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDDS-304: - Affects Version/s: 0.2.1 Status: Patch Available (was: Open) > Process ContainerAction from datanode heartbeat in SCM > -- > > Key: HDDS-304 > URL: https://issues.apache.org/jira/browse/HDDS-304 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: SCM >Affects Versions: 0.2.1 >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-304.000.patch > > > Datanodes send ContainerActions as part of heartbeat, we must add logic in > SCM to process those ContainerActions. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
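The behavior the HDDS-304 description asks for — SCM dispatching the ContainerActions carried in a datanode heartbeat — could be sketched as follows. All class, field, and method names here are illustrative assumptions, not taken from the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: SCM receives ContainerActions piggybacked on each
// heartbeat and dispatches by action type.
class ContainerActionSketch {
    static final int CLOSE = 0; // hypothetical action code

    static class ContainerAction {
        final long containerId;
        final int action;
        ContainerAction(long containerId, int action) {
            this.containerId = containerId;
            this.action = action;
        }
    }

    // Containers for which a close was requested, recorded for illustration.
    final List<Long> closeRequests = new ArrayList<>();

    void onHeartbeat(ContainerAction... actions) {
        for (ContainerAction a : actions) {
            if (a.action == CLOSE) {
                // A real SCM would fire a close-container event here.
                closeRequests.add(a.containerId);
            }
            // Other action types would be dispatched to their own handlers.
        }
    }
}
```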
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565286#comment-16565286 ] genericqa commented on HDFS-13421: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12090 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 42s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 51s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} HDFS-12090 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} HDFS-12090 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration | | | hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 | | JIRA Issue | HDFS-13421 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12933893/HDFS-13421-HDFS-12090.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7ef48568083e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12090 / d52a2af | | maven |
[jira] [Updated] (HDDS-310) VolumeSet shutdown hook fails on datanode restart
[ https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-310: --- Summary: VolumeSet shutdown hook fails on datanode restart (was: VolumeSet shutdown hoot fails on datanode restart) > VolumeSet shutdown hook fails on datanode restart > - > > Key: HDDS-310 > URL: https://issues.apache.org/jira/browse/HDDS-310 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Priority: Major > Fix For: 0.2.1 > > > {code} > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread > Interrupted waiting to refresh disk information: sleep interrupted > 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > java.util.concurrent.ExecutionException: java.lang.IllegalStateException: > Shutdown in progress, cannot remove a shutdownHook > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:206) > at > org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) > Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot > remove a shutdownHook > at > org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) > at > org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDDS-310) VolumeSet shutdown hoot fails on datanode restart
Mukul Kumar Singh created HDDS-310: -- Summary: VolumeSet shutdown hoot fails on datanode restart Key: HDDS-310 URL: https://issues.apache.org/jira/browse/HDDS-310 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Datanode Affects Versions: 0.2.1 Reporter: Mukul Kumar Singh Fix For: 0.2.1 {code} 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread Interrupted waiting to refresh disk information: sleep interrupted 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:206) at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68) Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247) at org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317) at org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
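The stack trace above shows VolumeSet.shutdown() calling removeShutdownHook from inside the shutdown hook itself, which the JDK rejects once shutdown is in progress. A minimal sketch of the failure mode and one possible guard (hypothetical names, not the committed HDDS-310 patch): only remove the hook on an explicit shutdown, and tolerate the IllegalStateException either way.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a shutdown path that is safe to run both
// explicitly (e.g. on datanode restart) and from the JVM shutdown hook.
class VolumeSetShutdownSketch {
    private final AtomicBoolean shutdownComplete = new AtomicBoolean(false);
    final AtomicInteger savedCount = new AtomicInteger(); // for illustration
    private Thread shutdownHookThread;

    void register() {
        // The hook saves usage data if shutdown() was never called explicitly.
        shutdownHookThread = new Thread(() -> shutdown(true));
        Runtime.getRuntime().addShutdownHook(shutdownHookThread);
    }

    /** fromHook is true when invoked by the JVM shutdown hook itself. */
    void shutdown(boolean fromHook) {
        if (!shutdownComplete.compareAndSet(false, true)) {
            return; // already shut down; explicit call and hook must not race
        }
        saveVolumeSetUsed();
        // Removing the hook while the JVM is already shutting down throws
        // IllegalStateException, so only remove it on an explicit shutdown.
        if (!fromHook && shutdownHookThread != null) {
            try {
                Runtime.getRuntime().removeShutdownHook(shutdownHookThread);
            } catch (IllegalStateException e) {
                // JVM shutdown already in progress; nothing left to do.
            }
        }
    }

    private void saveVolumeSetUsed() {
        savedCount.incrementAndGet(); // stands in for persisting volume usage
    }
}
```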
[jira] [Updated] (HDDS-230) Ozone Datanode exits during data write through Ratis
[ https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-230: --- Attachment: HDDS-230.001.patch > Ozone Datanode exits during data write through Ratis > > > Key: HDDS-230 > URL: https://issues.apache.org/jira/browse/HDDS-230 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.2.1 > > Attachments: HDDS-230.001.patch > > > Ozone datanode exits during data write with the following exception. > {code} > 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: > Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index > to:4565 > 2018-07-05 14:10:01,607 ERROR > org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit > status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the > StateMachineUpdater hits Throwable > java.lang.NullPointerException > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272) > at > org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058) > at > org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154) > at java.lang.Thread.run(Thread.java:745) > {code} > This might be the result of a Ratis transaction that was not written > through the "writeStateMachineData" phase but was nevertheless added to the raft > log. This implies that the StateMachineUpdater now applies a transaction without > the corresponding entry having been added to the state machine. > I am raising this jira to track the issue and will also raise a Ratis jira if > required.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
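The hypothesis in the HDDS-230 description — applyTransaction dereferencing per-index state that writeStateMachineData never created — can be sketched as below. Names are hypothetical, not the real ContainerStateMachine; the null guard shows one way the NPE could be avoided.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of the failure: applyTransaction looks up state keyed
// by log index that writeStateMachineData is supposed to have created, and
// an entry that skipped that phase leaves the lookup null.
class StateMachineSketch {
    private final Map<Long, CompletableFuture<Void>> writeFutures =
        new ConcurrentHashMap<>();

    void writeStateMachineData(long index) {
        writeFutures.put(index, CompletableFuture.completedFuture(null));
    }

    // Returns true if the entry at this index could be applied.
    boolean applyTransaction(long index) {
        CompletableFuture<Void> write = writeFutures.remove(index);
        if (write == null) {
            // Without this guard the updater thread dereferences null and,
            // as in the log above, terminates the process with exit status 2.
            return false;
        }
        write.join(); // wait for the state-machine data write to finish
        return true;
    }
}
```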
[jira] [Commented] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR
[ https://issues.apache.org/jira/browse/HDFS-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565086#comment-16565086 ] Ewan Higgs commented on HDFS-13766: --- This was done in HADOOP-15576 patch 003. > HDFS Classes used for implementation of Multipart uploads to move to > hadoop-common JAR > -- > > Key: HDFS-13766 > URL: https://issues.apache.org/jira/browse/HDFS-13766 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Blocker > > the multipart upload API uses classes which are only in {{hadoop-hdfs-client}} > These need to be moved to hadoop-common so that cloud deployments which don't > have the hdfs-client JAR on their CP (HD/I, possibly google dataproc) can > implement and use the API. > Sorry. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Status: Patch Available (was: Open) > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565080#comment-16565080 ] Ewan Higgs edited comment on HDFS-13421 at 8/1/18 10:03 AM: 008 - Fix aforementioned asflicense issue. - Renamed class on [~virajith]'s request. Regarding the DNA_BACKUP test, I have an end-to-end test for backup but this depends on code in the NN. I will be submitting the patch shortly to another ticket under HDFS-12090. was (Author: ehiggs): 008 - Fix aforementioned asflicense issue. - Renamed class on [~virajith]'s request. Regarding the DNA_BACKUP test, I have an end-to-end test for backup but this depends on code in the NN. I will be submitting the patch shortly. > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Attachment: HDFS-13421-HDFS-12090.008.patch > [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode > --- > > Key: HDFS-13421 > URL: https://issues.apache.org/jira/browse/HDFS-13421 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13421-HDFS-12090.001.patch, > HDFS-13421-HDFS-12090.002.patch, HDFS-13421-HDFS-12090.003.patch, > HDFS-13421-HDFS-12090.004.patch, HDFS-13421-HDFS-12090.005.patch, > HDFS-13421-HDFS-12090.006.patch, HDFS-13421-HDFS-12090.007.patch, > HDFS-13421-HDFS-12090.008.patch > > > HDFS-13310 introduces an API for DNA_BACKUP. Here, we implement DNA_BACKUP > command in Datanode. > These have been broken up to make reviewing it easier. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565080#comment-16565080 ] Ewan Higgs commented on HDFS-13421: --- 008 - Fix aforementioned asflicense issue. - Renamed class on [~virajith]'s request. Regarding the DNA_BACKUP test, I have an end-to-end test for backup but this depends on code in the NN. I will be submitting the patch shortly.
[jira] [Assigned] (HDFS-13766) HDFS Classes used for implementation of Multipart uploads to move to hadoop-common JAR
[ https://issues.apache.org/jira/browse/HDFS-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs reassigned HDFS-13766: - Assignee: Ewan Higgs > HDFS Classes used for implementation of Multipart uploads to move to > hadoop-common JAR > -- > > Key: HDFS-13766 > URL: https://issues.apache.org/jira/browse/HDFS-13766 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Ewan Higgs >Priority: Blocker > > the multipart upload API uses classes which are only in {{hadoop-hdfs-client}} > These need to be moved to hadoop-common so that cloud deployments which don't > have the hdfs-client JAR on their CP (HD/I, possibly google dataproc) can > implement and use the API. > Sorry.
[jira] [Updated] (HDFS-13421) [PROVIDED Phase 2] Implement DNA_BACKUP command in Datanode
[ https://issues.apache.org/jira/browse/HDFS-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-13421: -- Status: Open (was: Patch Available)
[jira] [Commented] (HDDS-278) ozone integration test sometime crashes
[ https://issues.apache.org/jira/browse/HDDS-278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565063#comment-16565063 ] Mukul Kumar Singh commented on HDDS-278: I was able to reproduce this issue locally; the root cause is HDDS-230. However, we should be able to handle abrupt shutdown of nodes, and the test should not crash. > ozone integration test sometime crashes > --- > > Key: HDDS-278 > URL: https://issues.apache.org/jira/browse/HDDS-278 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Priority: Major > Attachments: org.apache.hadoop.ozone.freon.TestFreon-output.txt > > > {code} > Lines that start with ? in the ASF License report indicate files that do > not have an Apache license header: > !? /testptch/hadoop/hadoop-ozone/integration-test/hs_err_pid7864.log > {code}
[jira] [Updated] (HDDS-278) ozone integration test sometime crashes
[ https://issues.apache.org/jira/browse/HDDS-278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDDS-278: --- Attachment: org.apache.hadoop.ozone.freon.TestFreon-output.txt
[jira] [Commented] (HDDS-230) Ozone Datanode exits during data write through Ratis
[ https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565058#comment-16565058 ] Mukul Kumar Singh commented on HDDS-230: Looked into this issue and was able to reproduce it on a cluster. On the leader: {code} 2018-07-05 18:09:35,474 [grpc-default-executor-10] INFO - adding chunk:8f10dd8e0e8a4fa236ffb1ec1f40bdc2_stream_35d91bc0-6b33-485d-bce6-19a96557180c_chunk_1 for container:14 {code} On follower 1: {code} 2018-07-05 18:09:35,575 [grpc-default-executor-3] INFO - adding chunk:8f10dd8e0e8a4fa236ffb1ec1f40bdc2_stream_35d91bc0-6b33-485d-bce6-19a96557180c_chunk_1 for container:14 {code} On follower 2, which went into a stop-the-world GC before this transaction: {code} 2018-07-05 14:10:01,606 [StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858] INFO - removing chunk:8f10dd8e0e8a4fa236ffb1ec1f40bdc2_stream_35d91bc0-6b33-485d-bce6-19a96557180c_chunk_1 for container:14 {code} This is the case where a transaction was committed on the leader and one follower, and the leader discarded the cache after that. The follower that catches up after this will request append entries for which the state machine data has already been discarded. This issue has been fixed in Ratis by RATIS-281, where the state machine provides an API called readStateMachineData, through which the state machine can supply stateMachineData that is missing inside Ratis. This jira proposes to fix the issue with changes in ContainerStateMachine to provide the state machine data to the Ratis leader. > Ozone Datanode exits during data write through Ratis > > > Key: HDDS-230 > URL: https://issues.apache.org/jira/browse/HDDS-230 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode >Affects Versions: 0.2.1 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Critical > Fix For: 0.2.1 > > > Ozone datanode exits during data write with the following exception.
> {code} > 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: > Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index > to:4565 > 2018-07-05 14:10:01,607 ERROR > org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit > status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the > StateMachineUpdater hits Throwable > java.lang.NullPointerException > at > org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272) > at > org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058) > at > org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154) > at java.lang.Thread.run(Thread.java:745) > {code} > This might be the result of a Ratis transaction that was not written > through the "writeStateMachineData" phase but was nevertheless added to the Raft > log. This implies that the StateMachineUpdater now applies a transaction without > the corresponding entry having been added to the state machine. > I am raising this jira to track the issue and will also raise a Ratis jira if > required.
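The recovery path described above can be illustrated with a simplified sketch. This is a hypothetical model, not the actual Ratis or ContainerStateMachine code: the class and method names below are illustrative only, and the real readStateMachineData API in Ratis operates on log entry protos, not plain maps. The point it shows is the fallback: when the leader's in-memory cache of state machine data has been evicted, the data is re-read from durable storage instead of failing the append to a lagging follower.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the RATIS-281 recovery path. The leader keeps a
// volatile cache of state machine data; once evicted, a follower catching
// up after a pause (e.g. a stop-the-world GC) would otherwise receive an
// append entry with no data. readStateMachineData re-reads it from the
// durable copy (in Ozone's case, the chunk files on disk).
public class StateMachineDataSketch {
    private final Map<Long, String> cache = new HashMap<>();   // leader's volatile cache
    private final Map<Long, String> durable = new HashMap<>(); // persisted by writeStateMachineData

    public void write(long index, String data) {
        cache.put(index, data);
        durable.put(index, data);
    }

    // Leader frees memory after the transaction is committed.
    public void evictCache() {
        cache.clear();
    }

    // Mirrors the role of readStateMachineData: fall back to durable
    // storage when the cached copy has been discarded.
    public Optional<String> readStateMachineData(long index) {
        String data = cache.get(index);
        if (data == null) {
            data = durable.get(index);
        }
        return Optional.ofNullable(data);
    }
}
```

With this fallback in place, a follower that missed the original write can still be served the chunk data, which is what the proposed ContainerStateMachine change provides to the Ratis leader.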
[jira] [Commented] (HDDS-309) closePipelineIfNoOpenContainers should remove pipeline from activePipelines list.
[ https://issues.apache.org/jira/browse/HDDS-309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565035#comment-16565035 ] Mukul Kumar Singh commented on HDDS-309: Thanks for looking into this issue [~candychencan]. A pipeline in ozone follows the state transitions defined in PipelineSelector#updatePipelineState. A pipeline is first moved to the closing state and then to the closed state. Before moving the pipeline to the closing state, it is removed from the active pipelines using PipelineSelector#finalizePipeline. > closePipelineIfNoOpenContainers should remove pipeline from activePipelines > list. > - > > Key: HDDS-309 > URL: https://issues.apache.org/jira/browse/HDDS-309 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: chencan >Priority: Minor > > Function closePipeline removes the pipeline from pipelineMap and > node2PipelineMap. If closePipeline is called by > closePipelineIfNoOpenContainers, are we supposed to remove the pipeline from > activePipelines?
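The ordering described in the comment above can be sketched as a small lifecycle model. This is a hypothetical simplification, not the actual PipelineSelector code; the class and method names are illustrative. It shows why closePipelineIfNoOpenContainers does not itself need to touch activePipelines: the pipeline was already dropped from the active list when finalizePipeline moved it into the closing state.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the pipeline close lifecycle: finalizePipeline
// removes the pipeline from the active list and moves it OPEN -> CLOSING;
// closePipeline later moves it CLOSING -> CLOSED.
public class PipelineLifecycleSketch {
    enum State { OPEN, CLOSING, CLOSED }

    static class Pipeline {
        final String name;
        State state = State.OPEN;
        Pipeline(String name) { this.name = name; }
    }

    private final List<Pipeline> activePipelines = new ArrayList<>();

    public Pipeline create(String name) {
        Pipeline p = new Pipeline(name);
        activePipelines.add(p);
        return p;
    }

    // Mirrors the role of finalizePipeline: drop the pipeline from the
    // active list before entering CLOSING, so it can no longer be
    // selected for new containers.
    public void finalizePipeline(Pipeline p) {
        activePipelines.remove(p);
        p.state = State.CLOSING;
    }

    // By the time closePipeline runs, the pipeline is already inactive.
    public void closePipeline(Pipeline p) {
        if (p.state != State.CLOSING) {
            throw new IllegalStateException("pipeline must be CLOSING before CLOSED");
        }
        p.state = State.CLOSED;
    }

    public boolean isActive(Pipeline p) {
        return activePipelines.contains(p);
    }
}
```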
[jira] [Created] (HDDS-309) closePipelineIfNoOpenContainers should remove pipeline from activePipelines list.
chencan created HDDS-309: Summary: closePipelineIfNoOpenContainers should remove pipeline from activePipelines list. Key: HDDS-309 URL: https://issues.apache.org/jira/browse/HDDS-309 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: chencan Function closePipeline removes the pipeline from pipelineMap and node2PipelineMap. If closePipeline is called by closePipelineIfNoOpenContainers, are we supposed to remove the pipeline from activePipelines?
[jira] [Commented] (HDDS-308) SCM should identify a container with pending deletes using container reports
[ https://issues.apache.org/jira/browse/HDDS-308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16564945#comment-16564945 ] genericqa commented on HDDS-308:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 23s | trunk passed |
| +1 | compile | 29m 38s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 1m 49s | trunk passed |
| +1 | shadedclient | 13m 41s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| -1 | findbugs | 0m 41s | hadoop-hdds/server-scm in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 31s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 17s | the patch passed |
| +1 | compile | 28m 45s | the patch passed |
| +1 | javac | 28m 45s | the patch passed |
| +1 | checkstyle | 0m 25s | the patch passed |
| +1 | mvnsite | 1m 49s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 2s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 1m 46s | the patch passed |
| -1 | javadoc | 0m 30s | hadoop-hdds_server-scm generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
|| Other Tests ||
| +1 | unit | 0m 46s | container-service in the patch passed. |
| -1 | unit | 1m 34s | server-scm in the patch failed. |
| +1 | unit | 4m 15s | integration-test in the patch passed. |
| -1 | asflicense | 0m 41s | The patch generated 1 ASF License warnings. |
| | | 131m 23s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-308 |
| JIRA Patch URL |
[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches
[ https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16564942#comment-16564942 ] Gabor Bota commented on HDFS-13322: --- Thanks for working on this [~pifta], [~fabbri]! > fuse dfs - uid persists when switching between ticket caches > > > Key: HDFS-13322 > URL: https://issues.apache.org/jira/browse/HDFS-13322 > Project: Hadoop HDFS > Issue Type: Bug > Components: fuse-dfs >Affects Versions: 2.6.0 > Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed > Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux > >Reporter: Alex Volskiy >Assignee: Istvan Fajth >Priority: Minor > Fix For: 3.2.0 > > Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, > HDFS-13322.003.patch, TestFuse.java, TestFuse2.java, catter.sh, catter2.sh, > perftest_new_behaviour_10k_different_1KB.txt, perftest_new_behaviour_1B.txt, > perftest_new_behaviour_1KB.txt, perftest_new_behaviour_1MB.txt, > perftest_old_behaviour_10k_different_1KB.txt, perftest_old_behaviour_1B.txt, > perftest_old_behaviour_1KB.txt, perftest_old_behaviour_1MB.txt, > testHDFS-13322.sh, test_after_patch.out, test_before_patch.out > > > The symptoms of this issue are the same as described in HDFS-3608 except the > workaround that was applied (detect changes in UID ticket cache) doesn't > resolve the issue when multiple ticket caches are in use by the same user. > Our use case requires that a job scheduler running as a specific uid obtain > separate kerberos sessions per job and that each of these sessions use a > separate cache. When switching sessions this way, no change is made to the > original ticket cache so the cached filesystem instance doesn't get > regenerated. 
> {code}
> $ export KRB5CCNAME=/tmp/krb5cc_session1
> $ kinit user_a@domain
> $ touch /fuse_mount/tmp/testfile1
> $ ls -l /fuse_mount/tmp/testfile1
> -rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1
> $ export KRB5CCNAME=/tmp/krb5cc_session2
> $ kinit user_b@domain
> $ touch /fuse_mount/tmp/testfile2
> $ ls -l /fuse_mount/tmp/testfile2
> -rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2
> ** expected owner to be user_b **
> {code}
[jira] [Commented] (HDDS-284) Interleaving CRC for ChunksData
[ https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16564824#comment-16564824 ] genericqa commented on HDDS-284:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 13 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 2m 0s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 47s | trunk passed |
| +1 | compile | 28m 37s | trunk passed |
| +1 | checkstyle | 0m 27s | trunk passed |
| +1 | mvnsite | 4m 49s | trunk passed |
| +1 | shadedclient | 16m 10s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 5m 33s | trunk passed |
| +1 | javadoc | 4m 50s | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 28s | the patch passed |
| +1 | compile | 28m 6s | the patch passed |
| +1 | cc | 28m 6s | the patch passed |
| +1 | javac | 28m 6s | the patch passed |
| +1 | checkstyle | 0m 28s | the patch passed |
| +1 | mvnsite | 4m 49s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) with tabs. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 39s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-ozone/integration-test |
| +1 | findbugs | 6m 28s | the patch passed |
| +1 | javadoc | 4m 51s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 6s | common in the patch passed. |
| +1 | unit | 1m 0s | container-service in the patch passed. |
| +1 | unit | 0m 32s | client in the patch passed. |
| +1 | unit | 0m 37s | client in the patch passed. |
| +1 | unit | 0m 38s | ozone-manager in the patch passed. |
| +1 | unit | 0m 36s |
[jira] [Updated] (HDDS-308) SCM should identify a container with pending deletes using container reports
[ https://issues.apache.org/jira/browse/HDDS-308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDDS-308: - Status: Patch Available (was: Open) > SCM should identify a container with pending deletes using container reports > > > Key: HDDS-308 > URL: https://issues.apache.org/jira/browse/HDDS-308 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: SCM >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Fix For: 0.2.1 > > Attachments: HDDS-308.001.patch > > > SCM should fire an event when it finds, using a container report, that a > container's deleteTransactionID does not match SCM's deleteTransactionId.
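The deleteTransactionId comparison described in HDDS-308 can be sketched as follows. This is a hypothetical simplification with illustrative names, not the actual SCM report-processing code: when a container report carries a deleteTransactionId that differs from the one SCM recorded, the datanode still has pending deletes for that container and an event should be fired.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the SCM-side check: compare the
// deleteTransactionId reported by a datanode for each container against
// the value SCM has recorded, and collect the mismatching container IDs
// (those with deletes still pending on the datanode).
public class PendingDeleteCheckSketch {
    private final Map<Long, Long> scmDeleteTxnIds = new HashMap<>();

    public void recordScmTxn(long containerId, long deleteTxnId) {
        scmDeleteTxnIds.put(containerId, deleteTxnId);
    }

    // Returns the container IDs that should trigger a pending-delete event.
    public List<Long> processReport(Map<Long, Long> reportedTxnIds) {
        List<Long> pending = new ArrayList<>();
        for (Map.Entry<Long, Long> entry : reportedTxnIds.entrySet()) {
            Long scmTxn = scmDeleteTxnIds.get(entry.getKey());
            if (scmTxn != null && !scmTxn.equals(entry.getValue())) {
                // Datanode lags behind SCM's delete transaction log.
                pending.add(entry.getKey());
            }
        }
        return pending;
    }
}
```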