[ 
https://issues.apache.org/jira/browse/HDDS-6404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Gui updated HDDS-6404:
---------------------------
    Description: 
To put, get, or iterate over keys under different schemas, we have to use
different key prefixes, since schema v3 introduced in this feature uses
containerID as the key prefix.

So we should abstract the key format for DB put and get across all three
schemas, so that containerID can be added as the key prefix for schema v3.

For example:

The original way is as follows; it cannot be used to put & get against a
per-disk RocksDB instance since the container ID is not present as a prefix:

 
{code:java}
metadataTable.putWithBatch(batchOperation, CONTAINER_BYTES_USED,
                           getBytesUsed());        // use the raw constants defined
metadataTable.putWithBatch(batchOperation, BLOCK_COUNT,
                           getKeyCount() - deletedBlockCount);
metadataTable.putWithBatch(batchOperation, PENDING_DELETE_BLOCK_COUNT,
                           (long)(getNumPendingDeletionBlocks() - deletedBlockCount));
{code}
 

The new way is as follows:

 
{code:java}
metadataTable.putWithBatch(batchOperation, bytesUsedKey(),
                           getBytesUsed());        // use wrapper helpers
metadataTable.putWithBatch(batchOperation, blockCountKey(),
                           getKeyCount() - deletedBlockCount);
metadataTable.putWithBatch(batchOperation, pendingDeleteBlockCountKey(),
                           (long)(getNumPendingDeletionBlocks() - deletedBlockCount));
{code}
 

So we wrap the schema check and prefix logic into helper functions, as above.

These helper functions will return a key formatted with the prefix for schema
v3, and the original constant for schema v1 & v2.
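To illustrate the intended shape of such a helper, here is a minimal sketch; the class and names here (`KeyFormatter`, `SCHEMA_V3`, the separator, the key strings) are hypothetical and for illustration only, not the actual Ozone implementation:

```java
// Hypothetical sketch of a per-schema key formatter.
class KeyFormatter {
    static final String SCHEMA_V1 = "1";
    static final String SCHEMA_V2 = "2";
    static final String SCHEMA_V3 = "3";

    private final String schemaVersion;
    private final long containerID;

    KeyFormatter(String schemaVersion, long containerID) {
        this.schemaVersion = schemaVersion;
        this.containerID = containerID;
    }

    // For schema v3, prepend the containerID so that keys from different
    // containers can coexist in one per-disk RocksDB instance; for
    // schema v1 & v2, return the original constant unchanged.
    String formatKey(String key) {
        if (SCHEMA_V3.equals(schemaVersion)) {
            return containerID + "|" + key;
        }
        return key;
    }
}
```

A `bytesUsedKey()`-style helper would then just delegate to `formatKey` with the corresponding constant.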

 

NOTE: This will only be a refactoring patch that returns the original key
values for schema v1 & v2 after schema v3 is introduced; we'll add logic to
return prefixed keys in future patches.



> Format table key according to schema in KeyValueContainerData.
> --------------------------------------------------------------
>
>                 Key: HDDS-6404
>                 URL: https://issues.apache.org/jira/browse/HDDS-6404
>             Project: Apache Ozone
>          Issue Type: Sub-task
>            Reporter: Mark Gui
>            Assignee: Mark Gui
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
