[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711936#comment-16711936
 ] 

Hudson commented on HDDS-864:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15569 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15569/])
HDDS-864. Use strongly typed codec implementations for the tables of the 
OmMetadataManager. (bharat: rev 343aaea2d12da0154273ff5f6eedc1ea5fae84cb)
* (edit) hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Codec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/CodecRegistry.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmBucketInfoCodec.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/VolumeListCodec.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStoreBuilder.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/package-info.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmVolumeArgsCodec.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmKeyInfoCodec.java


> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711931#comment-16711931
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Thank you, [~elek], for the updated patch.

The test failure is not related to the patch. I think TestKeys.java is failing on 
Jenkins, as I see on other jiras too; it looks like a flaky unit test.

+1 LGTM.

I will commit this shortly.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711332#comment-16711332
 ] 

Hadoop QA commented on HDDS-864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} root: The patch generated 0 new + 1 unchanged - 3 
fixed = 1 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m  2s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
45s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950813/HDDS-864.004.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 4c1c11da7e2a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 5d4a432 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1883/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1883/testReport/ |
| Max. process+thread count | 1286 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1883/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, 

[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-06 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711200#comment-16711200
 ] 

Elek, Marton commented on HDDS-864:
---

Thank you very much, [~bharatviswa], for the help in identifying the remaining 
unit test problems. It was a big help.

bq. 1. unused VolumeManagerImpl.java

Removed. Note: I am very surprised that checkstyle allows it without any 
warning...

bq. VolumeManagerImpl problems

Thanks, they are fixed. For the creation time I modified OmVolumeArgs.Builder(): 
if creationDate is not set in the builder, the current time is used.
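
A minimal sketch of that builder default (assuming a millisecond-based 
creationTime field; the real OmVolumeArgs.Builder may differ in detail):

{code:java}
import java.time.Instant;

/** Illustrative stand-in for OmVolumeArgs with a creation-time default. */
final class VolumeArgsSketch {
  private final long creationTime;

  private VolumeArgsSketch(long creationTime) {
    this.creationTime = creationTime;
  }

  long getCreationTime() {
    return creationTime;
  }

  static final class Builder {
    private Long creationTime; // null means the caller never set it

    Builder setCreationTime(long creationTime) {
      this.creationTime = creationTime;
      return this;
    }

    VolumeArgsSketch build() {
      // Fall back to "now" when the creation time was not set explicitly.
      long effective =
          creationTime != null ? creationTime : Instant.now().toEpochMilli();
      return new VolumeArgsSketch(effective);
    }
  }
}
{code}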

bq. Null check.

I added the main null-check logic to the CodecRegistry: CodecRegistry.asObject 
can return null if the byte array is null.

With this modification it's not possible to get a null in any of the codecs, but 
(just for safety) I added Preconditions.checkNotNull to both the toPersistedFormat 
and fromPersistedFormat methods, to get a more meaningful error message even 
though it should be impossible.
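
Roughly how that split looks (a hedged sketch; the real Codec interface and 
CodecRegistry in org.apache.hadoop.utils.db differ in detail):

{code:java}
import com.google.common.base.Preconditions;
import java.nio.charset.StandardCharsets;

final class CodecNullHandlingSketch {

  /** Simplified stand-in for the Codec interface. */
  interface Codec<T> {
    byte[] toPersistedFormat(T object);
    T fromPersistedFormat(byte[] rawData);
  }

  /** Individual codecs reject nulls outright, purely as a safety net. */
  static final class StringCodec implements Codec<String> {
    @Override
    public byte[] toPersistedFormat(String object) {
      Preconditions.checkNotNull(object, "Null object can't be converted");
      return object.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String fromPersistedFormat(byte[] rawData) {
      Preconditions.checkNotNull(rawData, "Null bytes can't be converted");
      return new String(rawData, StandardCharsets.UTF_8);
    }
  }

  /** Registry-level rule: a null byte array (missing key) maps to a null
   *  object, so the codecs themselves never see a null input. */
  static <T> T asObject(Codec<T> codec, byte[] rawData) {
    if (rawData == null) {
      return null;
    }
    return codec.fromPersistedFormat(rawData);
  }
}
{code}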


Will upload the patch soon, after testing it locally...

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710668#comment-16710668
 ] 

Bharat Viswanadham commented on HDDS-864:
-

{quote}As this patch is big enough I propose to do it in a next (smaller) patch.
{quote}
I am fine with doing it for the S3 table in a further Jira.

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710667#comment-16710667
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Thank you, [~elek], for the updated patch.

Patch almost LGTM.

A few comments I have are:

1. VolumeManagerImpl.java:
   Line 246: VolumeInfo newVolumeInfo = newVolumeArgs.getProtobuf(); can be 
   removed, as it is not being used.
   Line 247: metadataManager.getVolumeTable().put(dbVolumeKey, volumeArgs); 
   should be updated to 
   metadataManager.getVolumeTable().put(dbVolumeKey, newVolumeArgs);
2. In createVolume() we are writing args into the volumeTable with 
   metadataManager.getVolumeTable().putWithBatch(batch, dbVolumeKey, args);
   but we are not setting the creation time, as the original args will not have 
   it set. That is the reason for some of the test failures in 
   TestOzoneRpcClient. (See the sketch after this list for items 1 and 2.)
3. For OmKeyInfoCodec and the others, in toPersistedFormat(), as discussed 
   offline we can have a Precondition check for null, as technically we should 
   not get null there (this will also help in identifying bugs if we are 
   passing null values). Have a look into where we can add it.
4. I think if 1 and 2 are fixed, some of the unit tests will be fixed. Have a 
   look into the other tests to see whether those are real issues.
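
A tiny, self-contained sketch of items 1 and 2 (plain-Java toy with illustrative 
names only, not the real OM classes):

{code:java}
import java.util.HashMap;
import java.util.Map;

final class VolumeTableSketch {

  static final class VolumeArgs {
    final String owner;
    final long creationTime;

    VolumeArgs(String owner, long creationTime) {
      this.owner = owner;
      this.creationTime = creationTime;
    }
  }

  public static void main(String[] args) {
    Map<String, VolumeArgs> volumeTable = new HashMap<>();

    // Item 2: set the creation time before persisting in createVolume().
    volumeTable.put("/vol1", new VolumeArgs("alice", System.currentTimeMillis()));

    // Item 1: persist the updated newVolumeArgs, not the stale copy.
    VolumeArgs volumeArgs = volumeTable.get("/vol1");
    VolumeArgs newVolumeArgs = new VolumeArgs("bob", volumeArgs.creationTime);
    volumeTable.put("/vol1", newVolumeArgs);
  }
}
{code}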

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710528#comment-16710528
 ] 

Hadoop QA commented on HDDS-864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} root: The patch generated 0 new + 1 unchanged - 3 
fixed = 1 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 37s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
47s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.client.rest.TestOzoneRestClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.om.TestOzoneManager |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950741/HDDS-864.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 8b0382b9ae60 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 228156c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/testReport/ |
| Max. process+thread count | 1128 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1880/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: 

[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-05 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710361#comment-16710361
 ] 

Elek, Marton commented on HDDS-864:
---

Thanks for the detailed review, [~bharatviswa].

bq. We have not converted getS3Table() is this intentional or any reason for it 
to leave?

No exact reason. I started with converting only the key/volume/bucket tables in 
the first phase. After a while I realized that I couldn't do it without 
converting a few additional tables (e.g. the open key table). The S3 table was 
independent.

As this patch is big enough I propose to do it in a next (smaller) patch.

All the others will be part of the next patch. Right now I am fixing the 
remaining ones; it will be uploaded soon...
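
For context, a rough sketch of what the conversion means at a call site (hedged; 
the real Table/TypedTable and CodecRegistry in org.apache.hadoop.utils.db are 
richer than this, and the table and key names here are illustrative):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

final class TypedTableSketch {

  /** Minimal stand-in for a typed table backed by a raw byte[] store. */
  static final class StringTable {
    private final Map<String, byte[]> rawStore = new HashMap<>();

    void put(String key, String value) {
      // The codec lives inside the table, not in every caller.
      rawStore.put(key, value.getBytes(StandardCharsets.UTF_8));
    }

    String get(String key) {
      byte[] raw = rawStore.get(key);
      return raw == null ? null : new String(raw, StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) {
    StringTable volumeTable = new StringTable();
    volumeTable.put("/vol1", "owner=alice");
    // Callers work with domain values; no byte[] handling at the call site.
    System.out.println(volumeTable.get("/vol1"));
  }
}
{code}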

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-03 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707916#comment-16707916
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Adding a few more comments:

1. In getVolumeKey(), appending the volume name is missing (see the sketch 
below).
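
A minimal sketch of the expected shape (assuming the usual "/" OM key prefix; 
the real OMMetadataManager#getVolumeKey may differ):

{code:java}
final class VolumeKeySketch {
  private static final String OM_KEY_PREFIX = "/";

  /** The volume name must be appended to the prefix, e.g. "vol1" -> "/vol1". */
  static String getVolumeKey(String volume) {
    return OM_KEY_PREFIX + volume;
  }

  public static void main(String[] args) {
    System.out.println(getVolumeKey("vol1"));
  }
}
{code}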

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-03 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16707714#comment-16707714
 ] 

Bharat Viswanadham commented on HDDS-864:
-

Hi [~elek]

Thanks for the contribution.

A few comments I have:

1. Unused imports in Codec.java.
2. Javadoc is missing for add() in CodecRegistry.
3. We have not converted getS3Table(); is this an intentional change?
4. Can we move all the codec implementations to the codec package?
5. Unused import in OmBucketInfoCodec.java.
6. In OmKeyInfo.java the constructor is changed to public; I think we don't 
   need that.
7. Can we add the @VisibleForTesting annotation on the constructor not taking 
   the codecRegistry param? (See the sketch after this list.)
8. Add null checks in toPersistedFormat and fromPersistedFormat, as sometimes 
   these can be null and throw an NPE. (This is the cause of so many test 
   failures.)
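
For item 7, roughly this shape (a hedged sketch; the class and field names are 
illustrative, not the exact ones from the patch):

{code:java}
import com.google.common.annotations.VisibleForTesting;

final class TypedStoreSketch {

  /** Minimal stand-in for the codec registry. */
  static final class CodecRegistry { }

  private final CodecRegistry codecRegistry;

  /** Production constructor: the registry is supplied by the caller. */
  TypedStoreSketch(CodecRegistry codecRegistry) {
    this.codecRegistry = codecRegistry;
  }

  /** Convenience constructor without a codecRegistry param, for tests only. */
  @VisibleForTesting
  TypedStoreSketch() {
    this(new CodecRegistry());
  }
}
{code}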

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.






[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-12-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706949#comment-16706949
 ] 

Hadoop QA commented on HDDS-864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} root: The patch generated 14 new + 3 unchanged - 
1 fixed = 17 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 39s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 27s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.om.TestBucketManagerImpl |
|   | hadoop.ozone.om.TestKeyManagerImpl |
|   | hadoop.ozone.om.TestKeyDeletingService |
|   | hadoop.ozone.om.TestS3BucketManager |
|   | hadoop.ozone.container.keyvalue.TestKeyValueBlockIterator |
|   | hadoop.ozone.container.common.impl.TestHddsDispatcher |
|   | hadoop.ozone.container.keyvalue.TestBlockManagerImpl |
|   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950162/HDDS-864.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 6e7931f6f1dd 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 3044b78 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1860/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1860/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1860/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1860/testReport/ |
| Max. process+thread count | 188 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1860/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2018-11-21 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694764#comment-16694764
 ] 

Elek, Marton commented on HDDS-864:
---

The 001 patch compiles but is still work in progress and should be tested. But 
feel free to comment...

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-864.001.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables, 
> such as Table<KEY, VALUE> instead of Table<byte[], byte[]>.
> HDDS-748 provides the new TypedTable; in this jira I would fix the 
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.


