[
https://issues.apache.org/jira/browse/HDDS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705153#comment-16705153
]
Bharat Viswanadham edited comment on HDDS-748 at 11/30/18 6:44 PM:
-------------------------------------------------------------------
Thank you [~elek] for the updated patch.
I will look into HDDS-864 to see how the codecs are added. Thanks for the info.
The test failures in TestContainerSmallFile, TestBlockDeletingService and
TestBlockData are taken care of in HDDS-885.
TestOzoneConfigurationFields is failing because 2 fields are missing from
ozone-default.xml; I was going to open a bug for this, but it is also handled
in HDDS-885.
TestKeys.java#testPutAndGetKeyWithDnRestart is failing with the same error on
my local machine. I will check whether it is a real issue and, if so, open a
bug to fix it.
The new patch almost LGTM.
A few minor comments:
# Javadoc is missing in Codec.java.
# In some classes, like OMMetadataManagerImpl.java, the import order is
changed; is this intentional?
> Use strongly typed metadata Table implementation
> ------------------------------------------------
>
> Key: HDDS-748
> URL: https://issues.apache.org/jira/browse/HDDS-748
> Project: Hadoop Distributed Data Store
> Issue Type: Improvement
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Attachments: HDDS-748.001.patch, HDDS-748.002.patch,
> HDDS-748.003.patch, HDDS-748.004.patch
>
>
> NOTE: This issue is a proposal. I assigned it to myself to make it clear
> that it's not ready to implement; I just want to start a discussion about
> the proposed change.
> org.apache.hadoop.utils.db.DBStore (from HDDS-356) is a new generation
> MetadataStore to store all persistent state of hdds/ozone scm/om/datanodes.
> It supports column families via the Table interface, which has methods
> like:
> {code:java}
> byte[] get(byte[] key) throws IOException;
> void put(byte[] key, byte[] value)
> {code}
> In our current code we usually use static helpers to do the _byte[] ->
> object_ and _object -> byte[]_ conversion with protobuf.
> For example in KeyManagerImpl, OmKeyInfo.getFromProtobuf is used multiple
> times to deserialize the OmKeyInfo object.
>
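> To make the repetition concrete, here is a minimal, self-contained sketch of
> the current pattern. Plain JDK types stand in for the real byte[]-based
> Table and for OmKeyInfo; the class and key names are illustrative only:
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.HashMap;
> import java.util.Map;
>
> public class RawTableDemo {
>   public static void main(String[] args) {
>     // Stand-in for the byte[] -> byte[] table (a HashMap keyed by String
>     // for simplicity; the real table is keyed by byte[]).
>     Map<String, byte[]> keyTable = new HashMap<>();
>
>     // Every write repeats the object -> byte[] conversion inline...
>     keyTable.put("volume/bucket/key1",
>         "serialized-OmKeyInfo".getBytes(StandardCharsets.UTF_8));
>
>     // ...and every read repeats byte[] -> object, analogous to calling
>     // OmKeyInfo.getFromProtobuf at each call site.
>     byte[] raw = keyTable.get("volume/bucket/key1");
>     String value = raw == null ? null
>         : new String(raw, StandardCharsets.UTF_8);
>     System.out.println(value);
>   }
> }
> {code}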
> *I propose to create a type-safe table* using:
> {code:java}
> public interface Table<KEY_TYPE, VALUE_TYPE> extends AutoCloseable
> {code}
> The put and get could be modified to:
> {code:java}
> VALUE_TYPE get(KEY_TYPE key) throws IOException;
> void put(KEY_TYPE key, VALUE_TYPE value)
> {code}
> For example for the key table it could be:
> {code:java}
> OmKeyInfo get(String key) throws IOException;
> void put(String key, OmKeyInfo value)
> {code}
>
> This requires registering internal codec (marshaller/unmarshaller)
> implementations during the creation of the (..)Table.
> Registration of the codecs would be optional. Without it, the Table could
> work as it does now (using byte[], byte[]).
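> A minimal sketch of how such an optional codec registry could look. The
> names here (Codec, CodecRegistry, toPersistedFormat, fromPersistedFormat)
> are hypothetical, not the final API:
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.HashMap;
> import java.util.Map;
>
> // Hypothetical codec (marshaller/unmarshaller) interface.
> interface Codec<T> {
>   byte[] toPersistedFormat(T object);
>   T fromPersistedFormat(byte[] raw);
> }
>
> // Codecs are registered per Java type; lookup fails fast when no codec
> // was registered, matching the optional-registration idea above.
> class CodecRegistry {
>   private final Map<Class<?>, Codec<?>> codecs = new HashMap<>();
>
>   <T> void addCodec(Class<T> type, Codec<T> codec) {
>     codecs.put(type, codec);
>   }
>
>   @SuppressWarnings("unchecked")
>   <T> Codec<T> getCodec(Class<T> type) {
>     Codec<T> codec = (Codec<T>) codecs.get(type);
>     if (codec == null) {
>       throw new IllegalStateException("No codec is registered for " + type);
>     }
>     return codec;
>   }
> }
>
> public class CodecDemo {
>   public static void main(String[] args) {
>     CodecRegistry registry = new CodecRegistry();
>     registry.addCodec(String.class, new Codec<String>() {
>       public byte[] toPersistedFormat(String s) {
>         return s.getBytes(StandardCharsets.UTF_8);
>       }
>       public String fromPersistedFormat(byte[] raw) {
>         return new String(raw, StandardCharsets.UTF_8);
>       }
>     });
>     // Round-trip a value through the registered String codec.
>     Codec<String> codec = registry.getCodec(String.class);
>     byte[] raw = codec.toPersistedFormat("volume/bucket/key1");
>     System.out.println(codec.fromPersistedFormat(raw));
>   }
> }
> {code}
> A typed Table would then resolve its key and value codecs from such a
> registry once at creation time, instead of every call site converting by
> hand.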
> *Advantages*:
> * Simpler code (no need to repeat the serialization everywhere), which is
> less error-prone.
> * Clear separation of the layers and better measurability (as of now I
> can't see the serialization overhead with OpenTracing). Easier to test
> different serializations in the future.
> * Easier to create additional developer tools to investigate the current
> state of the rocksdb metadata stores. We had SQLCLI to export all the data
> to SQL, but with the format registered for each rocksdb table we could
> easily create a Calcite-based SQL console.
> *Additional info*:
> I would modify the interface of the DBStoreBuilder and DBStore:
> {code:java}
> this.store = DBStoreBuilder.newBuilder(conf)
>     .setName(OM_DB_NAME)
>     .setPath(Paths.get(metaDir.getPath()))
>     .addTable(KEY_TABLE, DBUtil.STRING_KEY_CODEC, new OmKeyInfoCoder())
>     //...
>     .build();
> {code}
> And using it from the DBStore:
> {code:java}
> //default, without codec
> Table<byte[], byte[]> getTable(String name) throws IOException;
> //advanced, with codec from the codec registry
> Table<String, OmKeyInfo> getTable(String name, Class keyType, Class valueType);
> //for example
> store.getTable(KEY_TABLE, String.class, OmKeyInfo.class);
> //or
> store.getTable(KEY_TABLE, String.class, UserInfo.class)
> //exception is thrown: No codec is registered for KEY_TABLE with type UserInfo.
> {code}
> *Priority*:
> I think it's a very useful and valuable step forward, but the real priority
> is lower. It is ideal for new contributors, especially as it's an
> independent, standalone part of the Ozone code.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)