[ https://issues.apache.org/jira/browse/HDFS-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16744579#comment-16744579 ]
Akira Ajisaka commented on HDFS-14158:
--------------------------------------

{code}
    if(now >= lastCheckpointTime + periodMSec) {
      shouldCheckpoint = true;
    } else {
{code}
Sorry, I missed that periodMSec is used not only as the sleep time but also as the checkpoint interval.

bq. Timo Walter, I suggest take a look at the patch in HDFS-1572 and incorporate the logic to yours if makes sense.

+1, thanks [~kihwal] for the suggestion.

> Checkpointer ignores configured time period > 5 minutes
> --------------------------------------------------------
>
>                 Key: HDFS-14158
>                 URL: https://issues.apache.org/jira/browse/HDFS-14158
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.1
>            Reporter: Timo Walter
>            Priority: Minor
>              Labels: checkpoint, hdfs, namenode
>
> The checkpointer always triggers a checkpoint every 5 minutes and ignores the flag "*dfs.namenode.checkpoint.period*" if it's greater than 5 minutes.
> See the code below (in Checkpointer.java):
> {code:java}
>   // Main work loop of the Checkpointer
>   public void run() {
>     // Check the size of the edit log once every 5 minutes.
>     long periodMSec = 5 * 60; // 5 minutes
>     if(checkpointConf.getPeriod() < periodMSec) {
>       periodMSec = checkpointConf.getPeriod();
>     }
> {code}
> If the configured period ("*dfs.namenode.checkpoint.period*") is lower than 5 minutes, the configured value is used. But it is always ignored if it's greater than 5 minutes.
>
> In my opinion, the if-expression should be:
> {code:java}
>     if(checkpointConf.getPeriod() > periodMSec) {
>       periodMSec = checkpointConf.getPeriod();
>     }
> {code}
>
> Then "*dfs.namenode.checkpoint.period*" won't get ignored.
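For illustration, here is a minimal, self-contained sketch of the decoupled scheme the suggestion above points at: the loop keeps waking up on a short poll interval, but the checkpoint decision uses the full configured period. This is not the HDFS-1572 patch or the Checkpointer code; the class and variable names (CheckpointSketch, checkPeriodMs, checkpointPeriodMs) and the short demo durations are illustrative only.

{code:java}
import java.util.concurrent.TimeUnit;

// Minimal sketch: poll often, but checkpoint only when the configured
// period has elapsed. All names and durations here are illustrative.
public class CheckpointSketch {

  public static void main(String[] args) throws InterruptedException {
    // Stand-in for dfs.namenode.checkpoint.period (e.g. 1 hour), shrunk to 3 s for the demo.
    long checkpointPeriodMs = TimeUnit.SECONDS.toMillis(3);

    // Poll interval capped at 1 s (stand-in for the 5-minute cap),
    // or shorter if the configured period itself is shorter.
    long checkPeriodMs = Math.min(TimeUnit.SECONDS.toMillis(1), checkpointPeriodMs);

    long lastCheckpointTime = System.nanoTime() / 1_000_000L;

    for (int i = 0; i < 8; i++) {        // bounded loop so the demo terminates
      Thread.sleep(checkPeriodMs);       // sleep on the short poll interval
      long now = System.nanoTime() / 1_000_000L;

      // The checkpoint decision uses the configured period, not the poll interval.
      if (now >= lastCheckpointTime + checkpointPeriodMs) {
        System.out.println("checkpoint at +" + (now - lastCheckpointTime) + " ms");
        lastCheckpointTime = now;
      } else {
        System.out.println("no checkpoint yet (+" + (now - lastCheckpointTime) + " ms)");
      }
    }
  }
}
{code}

With this split, raising the configured period above 5 minutes lengthens the time between checkpoints without slowing down the edit-log size check.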