[ https://issues.apache.org/jira/browse/HDDS-1200?focusedWorklogId=283488&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-283488 ]
ASF GitHub Bot logged work on HDDS-1200:
----------------------------------------
Author: ASF GitHub Bot
Created on: 26/Jul/19 17:13
Start Date: 26/Jul/19 17:13
Worklog Time Spent: 10m
Work Description: hgadre commented on pull request #1154: [HDDS-1200] Add
support for checksum verification in data scrubber
URL: https://github.com/apache/hadoop/pull/1154#discussion_r307833921
##########
File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java
##########
@@ -220,43 +229,66 @@ private void checkBlockDB() throws IOException {
       throw new IOException(dbFileErrorMsg);
     }
-
     onDiskContainerData.setDbFile(dbFile);
     try(ReferenceCountedDB db =
-        BlockUtils.getDB(onDiskContainerData, checkConfig)) {
-      iterateBlockDB(db);
-    }
-  }
+        BlockUtils.getDB(onDiskContainerData, checkConfig);
+        KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
+        new File(onDiskContainerData.getContainerPath()))) {
-  private void iterateBlockDB(ReferenceCountedDB db)
-      throws IOException {
-    Preconditions.checkState(db != null);
-
-    // get "normal" keys from the Block DB
-    try(KeyValueBlockIterator kvIter = new KeyValueBlockIterator(containerID,
-        new File(onDiskContainerData.getContainerPath()))) {
-
-      // ensure there is a chunk file for each key in the DB
-      while (kvIter.hasNext()) {
+      while(kvIter.hasNext()) {
         BlockData block = kvIter.nextBlock();
-
-        List<ContainerProtos.ChunkInfo> chunkInfoList = block.getChunks();
-        for (ContainerProtos.ChunkInfo chunk : chunkInfoList) {
-          File chunkFile;
-          chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
+        for(ContainerProtos.ChunkInfo chunk : block.getChunks()) {
+          File chunkFile = ChunkUtils.getChunkFile(onDiskContainerData,
               ChunkInfo.getFromProtoBuf(chunk));
-
           if (!chunkFile.exists()) {
             // concurrent mutation in Block DB? lookup the block again.
             byte[] bdata = db.getStore().get(
                 Longs.toByteArray(block.getBlockID().getLocalID()));
-            if (bdata == null) {
-              LOG.trace("concurrency with delete, ignoring deleted block");
-              break; // skip to next block from kvIter
-            } else {
-              String errorStr = "Missing chunk file "
-                  + chunkFile.getAbsolutePath();
-              throw new IOException(errorStr);
+            if (bdata != null) {
+              throw new IOException("Missing chunk file "
+                  + chunkFile.getAbsolutePath());
+            }
+          } else if (chunk.getChecksumData().getType()
+              != ContainerProtos.ChecksumType.NONE){
Review comment:
OK, let me refactor. Regarding the second question: I want to avoid disk I/O
when we know there is no checksum to verify against.
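
For context, here is a minimal, self-contained sketch of the short-circuit
being discussed: when the checksum type is NONE the chunk file is never
opened, so no disk I/O occurs. This is an illustration only, not the HDDS
code path; ChunkVerifySketch, verifyChunk, and the local ChecksumType enum
are hypothetical stand-ins for the real ContainerProtos types, and CRC32 is
used only as a convenient example checksum.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.zip.CRC32;

    /** Standalone sketch: verify a chunk file against an expected CRC32. */
    public class ChunkVerifySketch {

      /** Simplified stand-in for ContainerProtos.ChecksumType. */
      enum ChecksumType { NONE, CRC32 }

      /**
       * Returns true if the chunk passes verification. When the checksum
       * type is NONE we return immediately, so the file is never opened
       * and no disk I/O is performed -- the point made above.
       */
      static boolean verifyChunk(Path chunkFile, ChecksumType type,
          long expectedCrc) throws IOException {
        if (type == ChecksumType.NONE) {
          return true; // nothing to verify against; skip the read entirely
        }
        CRC32 crc = new CRC32();
        try (FileChannel ch = FileChannel.open(chunkFile)) {
          ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
          while (ch.read(buf) != -1) {
            buf.flip();       // switch buffer to read mode
            crc.update(buf);  // fold the bytes read into the running CRC
            buf.clear();      // reuse the buffer for the next read
          }
        }
        return crc.getValue() == expectedCrc;
      }

      public static void main(String[] args) throws IOException {
        Path p = Paths.get(args[0]);
        System.out.println(verifyChunk(p, ChecksumType.CRC32,
            Long.parseLong(args[1])));
      }
    }

Usage, under the same assumptions: compile and run with a chunk file path
and an expected CRC32 value, e.g. "java ChunkVerifySketch /tmp/chunk.data
123456789". The design point is that the type check costs nothing, while
an unconditional read would pay a full file scan even for NONE chunks.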
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 283488)
Time Spent: 2h 10m (was: 2h)
> Ozone Data Scrubbing : Checksum verification for chunks
> -------------------------------------------------------
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Supratim Deka
> Assignee: Hrishikesh Gadre
> Priority: Critical
> Labels: pull-request-available
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> The background scrubber should read each chunk and verify its checksum.
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]