[
https://issues.apache.org/jira/browse/HDDS-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hanisha Koneru updated HDDS-5359:
---------------------------------
Description:
Here is the default column family data from two different container replicas:
"#BCSID" -> 1354765
"#BLOCKCOUNT" -> -21
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 78
"#delTX" -> 1141106
"#BCSID" -> 1895040
"#BLOCKCOUNT" -> -5
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 106
"#delTX" -> 1146817
Update:
The BlockCount is incremented only when the stream is closed, not when the
BlockID is added to the DB. If the OutputStream is not closed properly, or if
for any reason the client starts writing to a new pipeline before the full
block is written, a block can end up present in the container without the
_key_count_ (BlockCount) ever being incremented for it.
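
To make the gap concrete, here is a small self-contained sketch (hypothetical
names, not the real Ozone write path): the block lands in the container DB via
putBlock, but the count is only bumped in the close path, so an abandoned
stream leaves an uncounted block behind.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (hypothetical names, not the real Ozone classes):
// a block can be present in the container DB while the block count is never
// incremented, because the count is bumped only on a clean stream close.
public class BlockCountGapSketch {
  private final Map<String, byte[]> containerDb = new HashMap<>();
  private long blockCount = 0;   // stands in for #BLOCKCOUNT / key_count

  void putBlock(String blockId, byte[] data) {
    containerDb.put(blockId, data);   // block is now in the container DB
  }

  void closeStream() {
    blockCount++;                     // count only incremented here
  }

  public static void main(String[] args) {
    BlockCountGapSketch container = new BlockCountGapSketch();
    container.putBlock("block-1", new byte[]{1, 2, 3});
    // The client abandons the stream (e.g. switches to a new pipeline
    // before the full block is written), so closeStream() never runs:
    System.out.println("blocks in DB = " + container.containerDb.size()
        + ", blockCount = " + container.blockCount);   // 1 vs 0
  }
}
{code}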
In the general case, the _used_bytes_ metadata is updated correctly, i.e.
whenever a chunk is written and putBlock is called. But when a chunk is
overwritten, it is assumed that the size of the chunk stays the same, even
though it is possible to write more data into the chunk than was originally
present. In that case, _used_bytes_ should be adjusted by the difference
between the old and new chunk sizes.
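
A minimal sketch of the suggested accounting, again with hypothetical names
rather than the real ChunkManager code: on an overwrite, add only the
difference between the new and old chunk lengths to _used_bytes_.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (hypothetical names): when a chunk is overwritten
// with a different length, adjust used_bytes by the difference in chunk
// sizes instead of assuming the size stayed the same.
public class UsedBytesSketch {
  private final Map<String, Integer> chunkLengths = new HashMap<>();
  private long usedBytes = 0;   // stands in for #BYTESUSED / used_bytes

  void writeChunk(String chunkName, int newLength) {
    Integer oldLength = chunkLengths.put(chunkName, newLength);
    if (oldLength == null) {
      usedBytes += newLength;               // new chunk: add its full size
    } else {
      usedBytes += newLength - oldLength;   // overwrite: add only the delta
    }
  }

  public static void main(String[] args) {
    UsedBytesSketch container = new UsedBytesSketch();
    container.writeChunk("chunk-0", 1024);
    container.writeChunk("chunk-0", 4096);  // overwrite with more data
    System.out.println("usedBytes = " + container.usedBytes);  // 4096
  }
}
{code}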
When blocks are deleted from a container, _key_count_ and _used_bytes_ are
decremented accordingly. But if these values were already too low (less than
the actual counts), deleting blocks can drive them negative.
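
A made-up arithmetic example (numbers are illustrative only) of how the
counter goes negative once it under-counts the blocks actually present:

{code:java}
// Made-up numbers, for illustration only: if the recorded block count is
// already lower than the number of blocks actually present, deleting every
// block in the container drives the counter below zero.
public class NegativeCountSketch {
  public static void main(String[] args) {
    long recordedBlockCount = 3;   // what #BLOCKCOUNT says
    long actualBlocks = 5;         // blocks physically present in the container
    recordedBlockCount -= actualBlocks;   // delete path decrements per block
    System.out.println("#BLOCKCOUNT after deleting all blocks = "
        + recordedBlockCount);     // prints -2
  }
}
{code}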
was:
Here are default column family data of two different container replicas,
"#BCSID" -> 1354765
"#BLOCKCOUNT" -> -21
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 78
"#delTX" -> 1141106
"#BCSID" -> 1895040
"#BLOCKCOUNT" -> -5
"#BYTESUSED" -> 0
"#PENDINGDELETEBLOCKCOUNT" -> 106
"#delTX" -> 1146817
> Incorrect BLOCKCOUNT and BYTESUSED in container DB
> --------------------------------------------------
>
> Key: HDDS-5359
> URL: https://issues.apache.org/jira/browse/HDDS-5359
> Project: Apache Ozone
> Issue Type: Bug
> Reporter: Sammi Chen
> Assignee: Hanisha Koneru
> Priority: Major
> Attachments: negative.txt
>
>