[ https://issues.apache.org/jira/browse/HDFS-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16531431#comment-16531431 ]

Andrew Wang commented on HDFS-13672:
------------------------------------

Hi Gabor, to respond to your question: I think the very common case will be 
zero lazy persist files, the rare case will be "some" (~thousands), and the 
very, very rare case will be lots (~millions).

I agree that holding the lock for a long time is an anti-pattern. A while ago 
I was actually working on a patch for a different feature that added a safe 
way of iterating over the block map.
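
Roughly what I had in mind is something like the chunked loop below (untested 
sketch only; the batch size is arbitrary, the method name is made up, and it 
assumes the corrupt-replica iterator stays valid across a 
writeUnlock()/writeLock() cycle, which is exactly the guarantee that safe 
block-map iteration would need to provide). It's also the same chunking the 
issue description suggests.
{code}
// Sketch against FSNamesystem internals, not a tested patch.
private static final int SCRUB_BATCH_SIZE = 10000; // arbitrary

private void clearCorruptLazyPersistFilesInBatches() throws IOException {
  boolean changed = false;
  final Iterator<BlockInfo> it =
      blockManager.getCorruptReplicaBlockIterator();
  boolean more = true;
  while (more) {
    List<BlockCollection> filesToDelete = new ArrayList<>();
    writeLock();
    try {
      // Scan at most SCRUB_BATCH_SIZE corrupt blocks per lock acquisition.
      int scanned = 0;
      while (it.hasNext() && scanned++ < SCRUB_BATCH_SIZE) {
        Block b = it.next();
        BlockInfo blockInfo = blockManager.getStoredBlock(b);
        if (blockInfo != null && blockInfo.getBlockCollection()
            .getStoragePolicyID() == lpPolicy.getId()) {
          filesToDelete.add(blockInfo.getBlockCollection());
        }
      }
      more = it.hasNext();
      for (BlockCollection bc : filesToDelete) {
        LOG.warn("Removing lazyPersist file " + bc.getName()
            + " with no replicas.");
        changed |= deleteInternal(bc.getName(), false, false, false);
      }
    } finally {
      // Dropping the lock here bounds each hold to one batch and lets other
      // operations interleave between batches.
      writeUnlock();
    }
  }
  // 'changed' would be consumed the same way the existing method consumes it.
}
{code}
The trade-off is that files that become corrupt while the lock is dropped may 
be missed until the next scrubber run, which seems acceptable for a periodic 
scrubber.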

However, for this case I don't know if it's worth spending a lot of time 
optimizing, since the number of corrupt blocks in the system is normally not 
that large. It's rare for a NameNode to be transitioned to active while 
missing this many DNs (which is why we have startup safemode checks). This 
probably only happens during debugging, in which case we could also solve the 
problem by setting the scrubber interval to 0 to disable it.
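
To be concrete, that would be something like the following in hdfs-site.xml 
(property name from memory, so worth double-checking against hdfs-default.xml):
{code}
<!-- A scrub interval of 0 should disable the LazyPersistFileScrubber thread
     entirely; any positive value is the period in seconds between runs. -->
<property>
  <name>dfs.namenode.lazypersist.file.scrub.interval.sec</name>
  <value>0</value>
</property>
{code}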

[~jojochuang] what do you think?

> clearCorruptLazyPersistFiles could crash NameNode
> -------------------------------------------------
>
>                 Key: HDFS-13672
>                 URL: https://issues.apache.org/jira/browse/HDFS-13672
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Wei-Chiu Chuang
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HDFS-13672.001.patch, HDFS-13672.002.patch
>
>
> I started a NameNode on a pretty large fsimage. Since the NameNode is started 
> without any DataNodes, all blocks (100 million) are "corrupt".
> Afterwards, I observed that FSNamesystem#clearCorruptLazyPersistFiles() held 
> the write lock for a long time:
> {noformat}
> 18/06/12 12:37:03 INFO namenode.FSNamesystem: FSNamesystem write lock held for 46024 ms via
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:198)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1689)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.clearCorruptLazyPersistFiles(FSNamesystem.java:5532)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:5543)
> java.lang.Thread.run(Thread.java:748)
>         Number of suppressed write-lock reports: 0
>         Longest write-lock held interval: 46024
> {noformat}
> Here's the relevant code:
> {code}
>       writeLock();
>       try {
>         final Iterator<BlockInfo> it =
>             blockManager.getCorruptReplicaBlockIterator();
>         while (it.hasNext()) {
>           Block b = it.next();
>           BlockInfo blockInfo = blockManager.getStoredBlock(b);
>           if (blockInfo.getBlockCollection().getStoragePolicyID() == lpPolicy.getId()) {
>             filesToDelete.add(blockInfo.getBlockCollection());
>           }
>         }
>         for (BlockCollection bc : filesToDelete) {
>           LOG.warn("Removing lazyPersist file " + bc.getName() + " with no replicas.");
>           changed |= deleteInternal(bc.getName(), false, false, false);
>         }
>       } finally {
>         writeUnlock();
>       }
> {code}
> In essence, the iteration over the corrupt replica list should be broken 
> down into smaller chunks to avoid a single long wait.
> Since this operation holds the NameNode write lock for more than 45 seconds, 
> the default ZKFC connection timeout, an extreme case like this (100 million 
> corrupt blocks) could lead to a NameNode failover.


