[ https://issues.apache.org/jira/browse/HDFS-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752092#comment-13752092 ]

Vinay commented on HDFS-5031:
-----------------------------

Thanks [~arpitagarwal] for taking a look at the Jira.
bq. Could you please describe your approach briefly to make the code review easier?

Sure.

There were multiple issues:
1. Storing into and retrieving from {{blockMap}}. Basically, the problem was 
introduced when {{LightWeightGSet}} was used instead of {{HashMap}} for 
{{blockMap}} in {{BlockPoolSliceScanner}} and {{BlockScanInfo.equals()}} was 
re-written. {{BlockScanInfo.equals()}} strictly checks for an instance of 
{{BlockScanInfo}}, but almost all retrievals from {{blockMap}} are done using 
an instance of {{Block}}, so the lookup always returns null and the block gets 
scanned again.
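
Here is a small standalone sketch of the mismatch, with simplified stand-ins 
(these are not the actual Hadoop classes, just illustrative ones reusing the 
names). As far as I can see the {{LightWeightGSet}}-style lookup goes through 
the stored element's {{equals()}}, i.e. {{storedElement.equals(key)}}, which 
is where the asymmetry bites; a plain {{HashMap}} lookup compares the other 
way around, which is why the old code worked.

{code:java}
// Simplified stand-ins for Block and BlockScanInfo; not the real HDFS classes.
class Block {
  final long blockId;
  Block(long blockId) { this.blockId = blockId; }
  @Override public int hashCode() { return Long.hashCode(blockId); }
  @Override public boolean equals(Object o) {
    // Any Block with the same id is equal.
    return (o instanceof Block) && ((Block) o).blockId == blockId;
  }
}

class BlockScanInfo extends Block {
  BlockScanInfo(long blockId) { super(blockId); }
  @Override public boolean equals(Object o) {
    // Problematic version: only another BlockScanInfo can ever be equal.
    return (o instanceof BlockScanInfo) && ((BlockScanInfo) o).blockId == blockId;
  }
}

public class EqualsAsymmetryDemo {
  public static void main(String[] args) {
    BlockScanInfo stored = new BlockScanInfo(1001L);  // element kept in blockMap
    Block lookupKey = new Block(1001L);               // key used for retrieval

    // HashMap direction: key.equals(storedElement) -> true, so the old
    // HashMap-based blockMap found the entry.
    System.out.println(lookupKey.equals(stored));     // true

    // GSet-style direction: storedElement.equals(key) -> false, so the
    // lookup misses, the block looks "never scanned", and it gets rescanned.
    System.out.println(stored.equals(lookupKey));     // false
  }
}
{code}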

2. {{logIterator.isPrevious()}} was mistakenly treating the last entry of the 
previous dncp log as if it were in the current one. This happened because, 
once the previous log's last entry was read and before that entry was 
returned, the stream was already opened to read the current dncp log. At that 
point {{logIterator.isPrevious()}} returned false. So every log roll was 
losing one entry from the scan log info, and the scan for that missed block 
would happen again before the scan period expires (i.e. 21 days by default).
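
A hypothetical, stripped-down iterator (list-backed instead of file-backed, 
so it is not the HDFS code) showing the ordering problem: the switch to the 
current log happens while the previous log's last entry is still in flight, 
so {{isPrevious()}} reports false for exactly that one entry.

{code:java}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class RollingLogIterator implements Iterator<String> {
  private final Iterator<String> previousLog;
  private final Iterator<String> currentLog;
  private boolean onPrevious = true;

  RollingLogIterator(List<String> previous, List<String> current) {
    this.previousLog = previous.iterator();
    this.currentLog = current.iterator();
  }

  @Override public boolean hasNext() {
    return previousLog.hasNext() || currentLog.hasNext();
  }

  @Override public String next() {
    if (onPrevious && previousLog.hasNext()) {
      String entry = previousLog.next();
      if (!previousLog.hasNext()) {
        // Previous log exhausted: "open" the current log right away --
        // but 'entry' (the previous log's last entry) has not been handed
        // back yet, so isPrevious() will lie about it.
        onPrevious = false;
      }
      return entry;
    }
    onPrevious = false;
    return currentLog.next();
  }

  boolean isPrevious() { return onPrevious; }
}

public class IsPreviousDemo {
  public static void main(String[] args) {
    RollingLogIterator it = new RollingLogIterator(
        Arrays.asList("prev-1", "prev-2"), Arrays.asList("cur-1"));
    while (it.hasNext()) {
      String entry = it.next();
      System.out.println(entry + " isPrevious=" + it.isPrevious());
    }
    // Output:
    //   prev-1 isPrevious=true
    //   prev-2 isPrevious=false   <-- wrong: this entry is lost on the roll
    //   cur-1 isPrevious=false
  }
}
{code}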

3. After fixing the above two issues, one more issue shows up due to an 
invalid value of {{bytesLeft}}. After one log roll, scanning does not happen 
at all until a bunch of new blocks (actually, about the same number of bytes 
as before the roll) is written again. This is because {{bytesLeft}} should be 
incremented when a block is added and decremented when the block is scanned, 
but it was also being decremented every time a roll happened. This drove 
{{bytesLeft}} to a negative value, so the scanner just returned from 
{{workRemainingInCurrentPeriod()}} without scanning the new blocks.
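
A back-of-the-envelope illustration of that accounting (made-up numbers, 
reduced to a bare counter rather than the real scanner state):

{code:java}
// Made-up numbers; just the counter arithmetic, not the real scanner state.
public class BytesLeftDemo {
  public static void main(String[] args) {
    long blockSize = 128L * 1024 * 1024;   // assume 128 MB blocks
    long bytesLeft = 0;

    // 10 blocks added -> bytesLeft grows; 10 blocks scanned -> back to 0.
    bytesLeft += 10 * blockSize;
    bytesLeft -= 10 * blockSize;

    // Buggy behaviour: the log roll subtracts the already-scanned bytes a
    // second time, pushing the counter negative.
    bytesLeft -= 10 * blockSize;
    System.out.println("bytesLeft after roll = " + bytesLeft);   // negative

    // A workRemainingInCurrentPeriod()-style check now sees no work, and
    // stays that way until ~10 blocks worth of new data is written again.
    System.out.println("work remaining? " + (bytesLeft > 0));    // false
  }
}
{code}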



                
> BlockScanner scans the block multiple times and on restart scans everything
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-5031
>                 URL: https://issues.apache.org/jira/browse/HDFS-5031
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0, 2.1.0-beta
>            Reporter: Vinay
>            Assignee: Vinay
>         Attachments: HDFS-5031.patch
>
>
> BlockScanner scans the block twice; also, on restart of the datanode it 
> scans everything again.
> Steps:
> 1. Write blocks with an interval of more than 5 seconds; write a new block 
> on completion of the scan for the previously written block.
> Each time the datanode scans a new block, it also scans the previous block, 
> which has already been scanned.
> Now, after a restart, the datanode scans all blocks again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
