[ https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14986273#comment-14986273 ]

Daryn Sharp commented on HDFS-8674:
-----------------------------------

[~mingma] It's a LinkedHashSet to cover the cases you state.  

bq. The motivation of randomization is to make sure we don't end up scanning 
the same blocks if there isn't much update to postponedMisreplicatedBlocks. 
<...> What is the reason to change from HashSet to LinkedHashSet, to have some 
order guarantee?  HashSet is faster than LinkedHashSet. <...> During the scan, 
if a block is marked as POSTPONE, it will be removed from 
postponedMisreplicatedBlocks first and added back later via 
rescannedMisreplicatedBlocks. Can it just remove those blocks from 
postponedMisreplicatedBlocks that aren't marked as POSTPONE, without using 
rescannedMisreplicatedBlocks?

The LinkedHashSet iterator effectively functions as a cheap circular array 
with acceptable insert/remove performance.  Yes, the order guarantee ensures 
all blocks are visited w/o "skipping" to a random location.  A block always 
has to be removed and re-inserted for the ordering to work; otherwise the 
same blocks would be scanned over and over again.
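A minimal sketch of that remove/re-insert pattern (class and method names here are hypothetical, not the actual BlockManager code; stillPostponed is a stand-in predicate):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;

public class PostponedScanSketch {
    // Stand-in for the real "is this block still postponed?" check.
    static boolean stillPostponed(long block) {
        return block % 2 == 0;
    }

    public static void main(String[] args) {
        LinkedHashSet<Long> postponed = new LinkedHashSet<>();
        for (long b = 0; b < 10; b++) postponed.add(b);

        int limit = 4;  // blocks to rescan per cycle
        List<Long> rescanned = new ArrayList<>();
        Iterator<Long> it = postponed.iterator();
        for (int i = 0; i < limit && it.hasNext(); i++) {
            long b = it.next();
            it.remove();               // remove from the head of the order...
            if (stillPostponed(b)) {
                rescanned.add(b);      // ...and queue for re-insert at the tail
            }
        }
        postponed.addAll(rescanned);   // re-inserted blocks now trail unscanned ones

        System.out.println(postponed); // → [4, 5, 6, 7, 8, 9, 0, 2]
    }
}
```

Because re-inserted blocks land at the tail of the insertion order, the next scan cycle picks up with the blocks not yet visited, giving circular-buffer behavior without a separate index.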

bq. Does the latency of a couple seconds come from HashSet iteration? A quick 
test indicates iterating through 5M entries of a HashSet takes around 50ms.

Yes, the cycles wasted skipping through the set were the killer.  A 
micro-benchmark doesn't capture the same chaotic runtime environment of over 
100 threads competing for resources.  Kihwal says (ironically) the performance 
"improved" under 5M.
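For contrast, the pre-patch scan described in the issue text below picked a random offset and had to step a HashSet iterator to reach it. A rough sketch of where those cycles go (hypothetical names; set scaled down from the 5M discussed above):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Random;

public class RandomOffsetScanSketch {
    public static void main(String[] args) {
        HashSet<Long> postponed = new HashSet<>();
        for (long b = 0; b < 1_000_000L; b++) postponed.add(b);

        // Pre-patch style: pick a random start offset for this cycle.
        int offset = new Random().nextInt(postponed.size());

        Iterator<Long> it = postponed.iterator();
        long skipped = 0;
        for (int i = 0; i < offset && it.hasNext(); i++) {
            it.next();  // every skipped entry costs a full iterator step
            skipped++;
        }
        // Only now does the real rescan of ~2k blocks begin; on average
        // half the set is traversed just to reach the starting point,
        // all while the write lock is held.
        System.out.println("skipped " + skipped + " entries before scanning");
    }
}
```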

In multiple production incidents, the postponed queue backed up with 10-20M+ 
blocks.  Scans determined the blocks were all still postponed.  Pre-patch, 
scan cycles took up to a few seconds and averaged many hundreds of ms.  
Performance was obviously atrocious.  Post-patch, the same scans took no more 
than a few ms.



> Improve performance of postponed block scans
> --------------------------------------------
>
>                 Key: HDFS-8674
>                 URL: https://issues.apache.org/jira/browse/HDFS-8674
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: HDFS
>    Affects Versions: 2.6.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>         Attachments: HDFS-8674.patch
>
>
> When a standby goes active, it marks all nodes as "stale" which will cause 
> block invalidations for over-replicated blocks to be queued until full block 
> reports are received from the nodes with the block.  The replication monitor 
> scans the queue with O(N) runtime.  It picks a random offset and iterates 
> through the set to randomize blocks scanned.
> The result is devastating when a cluster loses multiple nodes during a 
> rolling upgrade. Re-replication occurs, the nodes come back, the excess block 
> invalidations are postponed. Rescanning just 2k blocks out of millions of 
> postponed blocks may take multiple seconds. During the scan, the write lock 
> is held which stalls all other processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
