[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317511#comment-14317511 ]

Colin Patrick McCabe commented on HDFS-7686:
--------------------------------------------

bq. Unused Iterator import in the test, LoadingCache import in VolumeScanner

ok

bq. I like the cache since it's a nice way of preventing scanning the same 
blocks over and over again, but it'd be good to also use a LinkedHashMap 
instead of the LinkedList and also check existence in there before adding. That 
way we never have dupes in the suspect queue. It seems possible to have a 
working set bigger than the 1000 element cache size, like if an entire disk 
goes bad.

good idea... I'll use a {{LinkedHashSet}}.
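
The combination discussed above — a FIFO queue that rejects duplicates, plus a bounded "recently scanned" cache so the same block isn't rescanned over and over — can be sketched roughly as follows. This is an illustrative sketch only: the class and method names ({{SuspectBlockQueue}}, {{markSuspect}}, {{pollSuspect}}) are hypothetical, and a plain bounded {{LinkedHashMap}} stands in for the ~1000-entry Guava cache mentioned in the review comments.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Hypothetical sketch of the suspect-block bookkeeping discussed above.
// A LinkedHashSet keeps FIFO order while rejecting duplicate enqueues,
// and a size-bounded LinkedHashMap approximates the 1000-element
// "recently scanned" cache (the patch itself uses a Guava cache).
class SuspectBlockQueue {
    private static final int RECENTLY_SCANNED_CACHE_SIZE = 1000;

    // FIFO queue of suspect block IDs; duplicates are never enqueued.
    private final LinkedHashSet<Long> suspectBlocks = new LinkedHashSet<>();

    // LRU-bounded map: evicts the eldest entry once the cap is exceeded.
    private final Map<Long, Boolean> recentlyScanned =
        new LinkedHashMap<Long, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Boolean> e) {
                return size() > RECENTLY_SCANNED_CACHE_SIZE;
            }
        };

    /** Returns true if the block was newly enqueued for rapid rescan. */
    public synchronized boolean markSuspect(long blockId) {
        if (recentlyScanned.containsKey(blockId)) {
            return false;  // rescanned recently; don't queue it again
        }
        return suspectBlocks.add(blockId);  // no-op if already queued
    }

    /** Dequeues the next suspect block, or null if none are pending. */
    public synchronized Long pollSuspect() {
        Iterator<Long> it = suspectBlocks.iterator();
        if (!it.hasNext()) {
            return null;
        }
        Long id = it.next();
        it.remove();
        recentlyScanned.put(id, Boolean.TRUE);
        return id;
    }
}
```

With this shape, a disk with more than 1000 bad blocks can still drain through the queue: the set has no fixed cap, only the duplicate-suppression cache does.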

> Re-add rapid rescan of possibly corrupt block feature to the block scanner
> --------------------------------------------------------------------------
>
>                 Key: HDFS-7686
>                 URL: https://issues.apache.org/jira/browse/HDFS-7686
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Rushabh S Shah
>            Assignee: Colin Patrick McCabe
>            Priority: Blocker
>         Attachments: HDFS-7686.002.patch
>
>
> When doing a transferTo (aka sendfile operation) from the DataNode to a 
> client, we may hit an I/O error from the disk.  If we believe this is the 
> case, we should be able to tell the block scanner to rescan that block soon.  
> The feature was originally implemented in HDFS-7548 but was removed by 
> HDFS-7430.  We should re-add it.
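
The error path described above — catch a disk I/O error during the sendfile-style transfer and tell the scanner to rescan that block soon — can be sketched as below. All names here ({{SuspectTracker}}, {{BlockSender}}, {{markSuspectBlock}}) are illustrative stand-ins, not the actual DataNode hook.

```java
import java.io.IOException;

// Hypothetical sketch of the rapid-rescan trigger described above:
// when a transferTo-style send fails with an I/O error, flag the block
// as suspect before propagating the error to the caller.
interface SuspectTracker {
    void markSuspectBlock(long blockId);
}

class BlockSender {
    /** Stand-in for the transferTo/sendfile call that may hit a bad disk. */
    interface Transfer {
        void run() throws IOException;
    }

    private final SuspectTracker scanner;

    BlockSender(SuspectTracker scanner) {
        this.scanner = scanner;
    }

    /** Runs the transfer; on I/O failure, flags the block for rescan. */
    void send(long blockId, Transfer transfer) throws IOException {
        try {
            transfer.run();
        } catch (IOException e) {
            scanner.markSuspectBlock(blockId);  // request a rapid rescan
            throw e;                            // still surface the error
        }
    }
}
```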



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
