[ 
https://issues.apache.org/jira/browse/HDDS-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-5359:
---------------------------------
    Description: 
Here is the default column family data from two different container replicas:

"#BCSID" -> 1354765
"#BLOCKCOUNT" -> -21
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 78
"#delTX" -> 1141106

"#BCSID" -> 1895040
"#BLOCKCOUNT" -> -5
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 106
"#delTX" -> 1146817

 

Update: 

The BlockCount is incremented only when the stream is closed, not when the 
BlockID is added to the DB. If the OutputStream was not closed properly, or if, 
for any reason, the client starts writing to a new pipeline before the full 
block is written, a Block can be present in the container without the 
key_count (BlockCount) having been incremented for it. When a block is 
deleted from a container, the blockCount is also decremented. So if the 
blockCount is wrong to start with, deletions can drive it negative.
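A minimal illustration of how this drift produces a negative counter. The class and method names below are hypothetical, not from the Ozone codebase; they only model "increment on close, decrement on delete":

```java
// Hypothetical model: the counter is only incremented on a clean stream
// close, but every block deletion decrements it unconditionally.
public class BlockCountDrift {
    static long blockCount = 0;

    static void onStreamClose() { blockCount++; }  // only increment path
    static void onBlockDelete() { blockCount--; }  // always decrements

    public static void main(String[] args) {
        // Block 1: clean write, counter goes 0 -> 1.
        onStreamClose();
        // Block 2: the client switched pipelines before closing the stream,
        // so there is no increment, yet the block exists in the container DB.
        // Both blocks are later deleted:
        onBlockDelete();
        onBlockDelete();
        System.out.println(blockCount); // -1
    }
}
```

Repeated over many blocks, this is how values like "#BLOCKCOUNT" -> -21 can arise.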

When a block is deleted, usedBytes is decremented in memory after each chunk 
is deleted. The decrement happens even if the chunkFile does not exist (i.e. 
it was already deleted), so usedBytes can be decremented multiple times for 
the same chunk, eventually driving the total usedBytes metadata in the DB 
negative. Only once all the chunks in all the blocks in that iteration of the 
BlockDeletingService task are deleted is usedBytes updated in containerDB, by 
taking the in-memory value. This Jira proposes to first update the DB with the 
correct usedBytes (calculated from the BlockInfo after all chunks are deleted) 
and then update the in-memory metadata. This is the update sequence followed 
for all other state updates.
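A sketch of the proposed ordering. The types and fields (the BlockInfo chunk list, the container DB value, the in-memory counter) are stand-ins for the real Ozone classes, not their actual APIs:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the proposed update sequence: derive the released
// size from BlockInfo once all chunks are deleted, commit it to the DB
// first, then update the in-memory metadata.
public class UsedBytesUpdate {
    /** Stand-in for the in-memory container metadata. */
    static final AtomicLong inMemoryUsedBytes = new AtomicLong(1000);
    /** Stand-in for the "#BYTESUSED" value in the container DB. */
    static long dbUsedBytes = 1000;

    static void deleteBlock(List<Long> chunkSizesFromBlockInfo) {
        // Released bytes come from BlockInfo, not from per-chunk file
        // deletions, so a missing chunk file cannot double-decrement.
        long released = chunkSizesFromBlockInfo.stream()
                .mapToLong(Long::longValue).sum();
        // 1. Update the DB first with the value derived from BlockInfo...
        dbUsedBytes -= released;
        // 2. ...then the in-memory metadata, mirroring other state updates.
        inMemoryUsedBytes.addAndGet(-released);
    }

    public static void main(String[] args) {
        deleteBlock(List.of(256L, 256L)); // one block with two 256-byte chunks
        System.out.println(dbUsedBytes + " " + inMemoryUsedBytes.get()); // 488 488
    }
}
```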

Also, when a chunk is overwritten, it is assumed that the size of the chunk 
remains the same. But it is possible to overwrite more data into the chunk 
than was originally present. In this case, used_bytes should be updated with 
the difference in chunk sizes. (Adding this as a TODO.)
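The TODO amounts to applying a size delta instead of a no-op on overwrite. A hypothetical helper (names are illustrative, not Ozone's):

```java
// Hypothetical helper: on a chunk overwrite, adjust used_bytes by the
// difference between the new and old chunk sizes instead of leaving it
// unchanged.
public class ChunkOverwrite {
    static long usedBytes = 4096;

    static void overwriteChunk(long oldChunkSize, long newChunkSize) {
        usedBytes += newChunkSize - oldChunkSize; // delta, not a no-op
    }

    public static void main(String[] args) {
        overwriteChunk(1024, 1536); // 512 more bytes written into the chunk
        System.out.println(usedBytes); // 4608
    }
}
```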

  was:
Here are default column family data of two different container replicas,

"#BCSID" -> 1354765
"#BLOCKCOUNT" -> -21
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 78
"#delTX" -> 1141106

"#BCSID" -> 1895040
"#BLOCKCOUNT" -> -5
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 106
"#delTX" -> 1146817

 

Update: 

The BlockCount is incremented only when the Stream is closed and not when the 
BlockID is added to the DB. If the OutputStream was not closed properly or if, 
for any reason, the client starts writing to a new pipeline before the full 
block is written, it could lead to a Block being present in the container but 
the _key_count_ (BlockCount) not being incremented for it. 

In the general case, the _used_bytes_ metadata is updated correctly, i.e. 
whenever a chunk is written and putBlock is called. But when a chunk is 
overwritten, it is assumed that the size of the chunk remains the same. It is 
possible, however, to overwrite more data into the chunk than was originally 
present. In this case, _used_bytes_ should be updated with the difference in 
chunk sizes.

When blocks are deleted from a container, the _key_count_ and _used_bytes_ are 
decremented accordingly. But if these values were incorrect to start with 
(less than the actual), deleting blocks could drive them negative.


> Incorrect BLOCKCOUNT and BYTESUSED in container DB
> --------------------------------------------------
>
>                 Key: HDDS-5359
>                 URL: https://issues.apache.org/jira/browse/HDDS-5359
>             Project: Apache Ozone
>          Issue Type: Bug
>            Reporter: Sammi Chen
>            Assignee: Hanisha Koneru
>            Priority: Major
>         Attachments: negative.txt
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
