[jira] [Updated] (HDDS-1067) freon run on client gets hung when two of the datanodes are down in 3 datanode cluster

2019-03-28 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1067:
---
Target Version/s: 0.5.0

> freon run on client gets hung when two of the datanodes are down in 3 
> datanode cluster
> --
>
> Key: HDDS-1067
> URL: https://issues.apache.org/jira/browse/HDDS-1067
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: stack_file.txt
>
>
> Steps taken:
> 
>  # Created a 3-node docker cluster.
>  # Wrote a key.
>  # Created a partition such that 2 out of 3 datanodes cannot communicate with 
> any other node.
>  # The third datanode can communicate with SCM, OM, and the client.
>  # Ran freon to write a key.
> Observation:
> -
> The freon run hangs; there is no timeout.
>  
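Since the report is about the absence of a client-side bound on a blocking write, a generic timeout guard illustrates the missing behavior; everything below (class, method names, durations) is hypothetical and not the actual Ozone client API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWrite {
    // Run a blocking write with an upper bound on wait time, so the client
    // fails fast instead of hanging forever when the pipeline has no quorum.
    static <T> T callWithDeadline(Callable<T> write, long millis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> f = pool.submit(write);
            return f.get(millis, TimeUnit.MILLISECONDS); // throws TimeoutException
        } finally {
            pool.shutdownNow(); // interrupt the stuck write
        }
    }

    public static void main(String[] args) throws Exception {
        boolean timedOut = false;
        try {
            // Simulated key write that never completes (stands in for the hung freon run).
            callWithDeadline(() -> { Thread.sleep(60_000); return "ok"; }, 200);
        } catch (TimeoutException e) {
            timedOut = true;
        }
        System.out.println("timedOut=" + timedOut);
    }
}
```

With such a guard the client would surface a `TimeoutException` instead of blocking indefinitely.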



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13833) Improve BlockPlacementPolicyDefault's consider load logic

2019-03-28 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804600#comment-16804600
 ] 

Brahma Reddy Battula commented on HDFS-13833:
-

Good catch! Can we backport this to the other branches as well?

> Improve BlockPlacementPolicyDefault's consider load logic
> -
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13833.001.patch, HDFS-13833.002.patch, 
> HDFS-13833.003.patch, HDFS-13833.004.patch, HDFS-13833.005.patch
>
>
> I'm having an intermittent problem with block replication on Hadoop 
> 2.6.0-cdh5.15.0 (Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21).
> In my case the error shows up very randomly (after some hours), and with 
> only one Datanode for now (we are trying this Cloudera cluster for a POC).
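The "too busy" exclusion visible in the log below reduces to a single load comparison; a hedged sketch of that predicate (method and parameter names are illustrative, not the actual BlockPlacementPolicyDefault code):

```java
public class ConsiderLoad {
    // Sketch of the "consider load" check that produces the
    // "is not chosen since the node is too busy" line in the log.
    // With a single datanode and zero/stale cluster stats, the average
    // load can be 0.0, so any nonzero node load (e.g. 8) fails the check.
    static boolean tooBusy(int nodeXceiverCount, double clusterAvgLoad, double factor) {
        return nodeXceiverCount > factor * clusterAvgLoad;
    }

    public static void main(String[] args) {
        System.out.println(tooBusy(8, 0.0, 2.0)); // mirrors "load: 8 > 0.0"
    }
}
```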
> Here is the Log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at 

[jira] [Commented] (HDFS-14396) Failed to load image from FSImageFile when downgrade from 3.x to 2.x

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804592#comment-16804592
 ] 

Hadoop QA commented on HDFS-14396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964122/HDFS-14396.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c55e3f27f4f 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d7a2f94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26543/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26543/testReport/ |
| Max. process+thread count | 2683 (vs. 

[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Attachment: HDDS-1351.002.patch

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch, HDDS-1351.002.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
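One plausible fix (a sketch, not the actual patch) is to add the missing artifact to the {{hadoop-ozone-tools}} dependencies; the version below is illustrative only:

```xml
<!-- Hypothetical pom.xml fragment; version is illustrative, not from the patch. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
  <version>2.3.0.1</version>
</dependency>
```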






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220397&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220397
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 29/Mar/19 05:02
Start Date: 29/Mar/19 05:02
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477868451
 
 
   Actually we usually only need one PR for trunk and then cherry-pick the 
change to ozone-0.4.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220397)
Time Spent: 1h 50m  (was: 1h 40m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220393&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220393
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 29/Mar/19 04:59
Start Date: 29/Mar/19 04:59
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477867857
 
 
   > +1. Just notice #660 is for ozone-0.4 and #659 is for trunk.
   > Why we need different dependencies?
   
   Nice catch.
   
   I was using different compose files for the different branches.  We need the 
other two dependencies (for JDK 11) on trunk, too.
   
   Pushed additional commit to the other PR.
 



Issue Time Tracking
---

Worklog Id: (was: 220393)
Time Spent: 1h 40m  (was: 1.5h)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1164) Add New blockade Tests to test Replica Manager

2019-03-28 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-1164:
-
Attachment: HDDS-1164.002.patch

> Add New blockade Tests to test Replica Manager
> --
>
> Key: HDDS-1164
> URL: https://issues.apache.org/jira/browse/HDDS-1164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
>  Labels: postpone-to-craterlake
> Attachments: HDDS-1164.001.patch, HDDS-1164.002.patch
>
>







[jira] [Commented] (HDDS-1352) Remove unused call in TestStorageContainerManagerHttpServer

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804565#comment-16804565
 ] 

Hadoop QA commented on HDDS-1352:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 19s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  0s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2599/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1352 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964100/HDDS-1352.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 4de79e1b206c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / d7a2f94 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2599/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2599/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 

[jira] [Updated] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1189:
--
Attachment: (was: HDDS-1189.03.patch)

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the DB schema for the Recon service.
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts away SQL dialects, and 
> b) seamless code-to-schema and schema-to-code transitions, which are critical 
> for creating DDL through code and for unit testing across versions of the 
> application.
> - Add an e2e unit test suite for Recon entities, based on the design doc.
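To make the code-to-schema direction concrete, the DDL generated from code for a V1 aggregate table could look roughly like the fragment below; the table and column names are invented for illustration, not taken from the patch:

```sql
-- Illustrative only: a Recon-style aggregate table, names invented.
CREATE TABLE cluster_growth_daily (
    ts              BIGINT NOT NULL,  -- day-granularity epoch timestamp
    datanode_count  INT    NOT NULL,
    container_count INT    NOT NULL,
    PRIMARY KEY (ts)
);
```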






[jira] [Updated] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1189:
--
Attachment: HDDS-1189.03.patch

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the DB schema for the Recon service.
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts away SQL dialects, and 
> b) seamless code-to-schema and schema-to-code transitions, which are critical 
> for creating DDL through code and for unit testing across versions of the 
> application.
> - Add an e2e unit test suite for Recon entities, based on the design doc.






[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-28 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804551#comment-16804551
 ] 

Feilong He commented on HDFS-14355:
---

OK. Thanks for your help. I will do that immediately.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch
>
>
> This task is to implement caching to persistent memory using a pure 
> {{java.nio.MappedByteBuffer}}, which could be useful where native support 
> isn't available or convenient in some environments or platforms.
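The pure-Java mapping approach can be sketched with only the standard library; the file name, sizes, and "block" payload below are hypothetical and not taken from the patch:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedCacheSketch {
    public static void main(String[] args) throws IOException {
        Path cacheFile = Files.createTempFile("pmem-cache", ".bin");
        byte[] block = "replica-bytes".getBytes();
        try (FileChannel ch = FileChannel.open(cacheFile,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map a region of the cache file; READ_WRITE extends the file as needed.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, block.length);
            buf.put(block);   // "cache" the block bytes through the mapping
            buf.force();      // flush the mapped region to the backing store
            buf.rewind();
            byte[] readBack = new byte[block.length];
            buf.get(readBack);
            System.out.println(new String(readBack));
        } finally {
            Files.deleteIfExists(cacheFile);
        }
    }
}
```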






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220369
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 29/Mar/19 03:01
Start Date: 29/Mar/19 03:01
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477848297
 
 
   +1 I will commit this shortly. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220369)
Time Spent: 1h 20m  (was: 1h 10m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
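
Since the missing class lives in the {{jaxb-core}} jar, the natural shape of a fix is a dependency addition. The fragment below is a hedged sketch only: {{com.sun.xml.bind:jaxb-core}} is the usual Maven Central home of {{AnnotationReader}}, but the actual HDDS-1351 patch may pin a different artifact, version, or module.

```xml
<!-- Hypothetical addition to the hadoop-ozone-tools pom; the committed
     HDDS-1351 fix may differ. com.sun.xml.bind:jaxb-core provides
     com.sun.xml.bind.v2.model.annotation.AnnotationReader. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
</dependency>
```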






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220372=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220372
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 29/Mar/19 03:04
Start Date: 29/Mar/19 03:04
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477848297
 
 
   +1. Just noticed #660 is for ozone-0.4 and #659 is for trunk. 
   Why do we need different dependencies? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220372)
Time Spent: 1.5h  (was: 1h 20m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804531#comment-16804531
 ] 

Hadoop QA commented on HDDS-1189:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-ozone: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  0s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  8s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2598/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1189 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964119/HDDS-1189.03.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 

[jira] [Updated] (HDDS-1352) Remove unused call in TestStorageContainerManagerHttpServer

2019-03-28 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1352:

Status: Patch Available  (was: Open)

> Remove unused call in TestStorageContainerManagerHttpServer
> ---
>
> Key: HDDS-1352
> URL: https://issues.apache.org/jira/browse/HDDS-1352
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDDS-1352.001.patch
>
>
> Remove unused call to InetSocketAddress.createUnresolved() in 
> TestStorageContainerManagerHttpServer






[jira] [Commented] (HDFS-14392) Backport HDFS-9787 to branch-2

2019-03-28 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804526#comment-16804526
 ] 

Chen Liang commented on HDFS-14392:
---

I ran the failed test locally and it passed as well; the failure seems unrelated. +1 on the v000 
patch, I've committed it to branch-2. Thanks [~csun]!

> Backport HDFS-9787 to branch-2
> --
>
> Key: HDFS-14392
> URL: https://issues.apache.org/jira/browse/HDFS-14392
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-14392-branch-2.000.patch
>
>
> As multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HDFS-9787.






[jira] [Updated] (HDFS-14392) Backport HDFS-9787 to branch-2

2019-03-28 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14392:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Backport HDFS-9787 to branch-2
> --
>
> Key: HDFS-14392
> URL: https://issues.apache.org/jira/browse/HDFS-14392
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-14392-branch-2.000.patch
>
>
> As multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HDFS-9787.






[jira] [Created] (HDFS-14398) Update HAState.java and modify the typos.

2019-03-28 Thread bianqi (JIRA)
bianqi created HDFS-14398:
-

 Summary: Update HAState.java and modify the typos.
 Key: HDFS-14398
 URL: https://issues.apache.org/jira/browse/HDFS-14398
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: bianqi


https://github.com/apache/hadoop/pull/644






[jira] [Updated] (HDFS-14396) Failed to load image from FSImageFile when downgrade from 3.x to 2.x

2019-03-28 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14396:
---
Attachment: HDFS-14396.002.patch

> Failed to load image from FSImageFile when downgrade from 3.x to 2.x
> 
>
> Key: HDFS-14396
> URL: https://issues.apache.org/jira/browse/HDFS-14396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14396.001.patch, HDFS-14396.002.patch
>
>
> After fixing HDFS-13596, we tried to downgrade from 3.x to 2.x, but the 
> namenode can't start because an exception occurs. The message follows
> {code:java}
> 2019-01-23 17:22:18,730 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Failed to load image from 
> FSImageFile(file=/data1/hadoopdata/hadoop-namenode/current/fsimage_0025310,
>  cpktTxId=00
> 25310)
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:869)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:742)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:673)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:672)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:839)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1517)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1583)
> 2019-01-23 17:22:19,023 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: Failed to load FSImage file, see error(s) above for more 
> info.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> {code}
> This issue occurs because the 3.x namenode saves the image with EC fields 
> during upgrade.
> This patch tries to fix it.
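
A minimal illustration of the failure mode described above. The names are simplified stand-ins, not the real FSImage loader (the actual NPE is at {{FSImageFormatProtobuf$Loader.loadInternal}}): a loader that dereferences an image section without a null check fails exactly this way when an image from a different release line lacks a section the loader expects, whereas a defensive lookup degrades gracefully.

```java
import java.util.HashMap;
import java.util.Map;

public class ImageLoaderSketch {
    /**
     * Stand-in for reading one named section out of an fsimage,
     * here modeled as a map of section name to payload.
     */
    static String loadSection(Map<String, String> image, String section) {
        String payload = image.get(section);
        if (payload == null) {
            // Defensive handling instead of the NullPointerException
            // reported in the issue: treat a missing section as empty.
            return "";
        }
        return payload;
    }

    public static void main(String[] args) {
        Map<String, String> image = new HashMap<>();
        image.put("INODE", "...");
        // The loader on the other side of the up/downgrade has no
        // erasure-coding section to offer.
        System.out.println(loadSection(image, "ERASURE_CODING").isEmpty());
    }
}
```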






[jira] [Commented] (HDFS-14396) Failed to load image from FSImageFile when downgrade from 3.x to 2.x

2019-03-28 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804523#comment-16804523
 ] 

Fei Hui commented on HDFS-14396:


Uploaded v002 patch without a UT

> Failed to load image from FSImageFile when downgrade from 3.x to 2.x
> 
>
> Key: HDFS-14396
> URL: https://issues.apache.org/jira/browse/HDFS-14396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14396.001.patch, HDFS-14396.002.patch
>
>
> After fixing HDFS-13596, we tried to downgrade from 3.x to 2.x, but the 
> namenode can't start because an exception occurs. The message follows
> {code:java}
> 2019-01-23 17:22:18,730 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Failed to load image from 
> FSImageFile(file=/data1/hadoopdata/hadoop-namenode/current/fsimage_0025310,
>  cpktTxId=00
> 25310)
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:885)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:869)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:742)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:673)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:672)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:839)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:823)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1517)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1583)
> 2019-01-23 17:22:19,023 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: Failed to load FSImage file, see error(s) above for more 
> info.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:688)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:998)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:612)
> {code}
> This issue occurs because the 3.x namenode saves the image with EC fields 
> during upgrade.
> This patch tries to fix it.






[jira] [Commented] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider

2019-03-28 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804514#comment-16804514
 ] 

Konstantin Shvachko commented on HDFS-14245:


ORPP requires {{ClientProtocol}}, since it needs {{getHAServiceState()}} 
internally.
{{GetUserMappingsProtocol}} has only one method, and so cannot help discovering 
NameNode states.
For {{GetGroups}}, it should have been implemented as part of the admin commands 
instead of as a separate {{Tool}}.
Oh well, I think going to the ANN for groups, which the patch does, is fine, 
and it fixes the problem.
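
A minimal illustration of the constraint described above, with simplified stand-ins rather than the actual HDFS interfaces: a read-proxy wrapper can only probe a server's HA state if the protocol it wraps exposes a state call, which a single-method protocol like {{GetUserMappingsProtocol}} does not.

```java
// Simplified stand-in for ClientProtocol: carries a state probe
// alongside its regular RPCs.
interface ClientProtocolLike {
    String getHAServiceState();      // what an ORPP-like wrapper needs
    String[] getGroups(String user);
}

// Stand-in for GetUserMappingsProtocol: one method, no way to ask
// which NameNode state the server is in.
interface GetUserMappingsOnly {
    String[] getGroups(String user);
}

class ObserverReadWrapper {
    private final ClientProtocolLike proxy;
    ObserverReadWrapper(ClientProtocolLike proxy) { this.proxy = proxy; }

    boolean isObserver() {
        // Possible only because the wrapped protocol exposes state;
        // there is no equivalent call on GetUserMappingsOnly.
        return "OBSERVER".equals(proxy.getHAServiceState());
    }
}

public class OrppSketch {
    public static void main(String[] args) {
        ClientProtocolLike stub = new ClientProtocolLike() {
            public String getHAServiceState() { return "OBSERVER"; }
            public String[] getGroups(String user) { return new String[] {"hdfs"}; }
        };
        System.out.println(new ObserverReadWrapper(stub).isObserver());
    }
}
```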

> Class cast error in GetGroups with ObserverReadProxyProvider
> 
>
> Key: HDFS-14245
> URL: https://issues.apache.org/jira/browse/HDFS-14245
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-12943
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HDFS-14245.patch
>
>
> Run "hdfs groups" with ObserverReadProxyProvider, Exception throws as :
> {code:java}
> Exception in thread "main" java.io.IOException: Couldn't create proxy 
> provider class 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>  at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87)
>  at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96)
> Caused by: java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245)
>  ... 7 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be 
> cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.(ObserverReadProxyProvider.java:123)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.(ObserverReadProxyProvider.java:112)
>  ... 12 more
> {code}
> similar with HDFS-14116, we did a simple fix.






[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220351=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220351
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:42
Start Date: 29/Mar/19 01:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477833013
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 963 | trunk passed |
   | +1 | compile | 23 | trunk passed |
   | +1 | mvnsite | 24 | trunk passed |
   | +1 | shadedclient | 644 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 16 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | +1 | compile | 20 | the patch passed |
   | +1 | javac | 20 | the patch passed |
   | +1 | mvnsite | 20 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 13 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 2681 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux 62659a6c688e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d7a2f94 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220351)
Time Spent: 5.5h  (was: 5h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.






[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220350=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220350
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:42
Start Date: 29/Mar/19 01:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256929
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -72,13 +72,20 @@ execute_tests(){
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   wait_for_datanodes "$COMPOSE_FILE"
+
+  if [ ${COMPOSE_DIR} == "ozonesecure" ]; then
+   SECURITY_ENABLED="true"
+  else
+   SECURITY_ENABLED="false"
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220350)
Time Spent: 5h 20m  (was: 5h 10m)
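
The fragment quoted in the review above is cut off by the tooling (the closing {{fi}} is missing from the quote). A sketch of how the complete conditional would plausibly read, indented with spaces as the whitespace bot requests; the actual committed version of test.sh may differ.

```shell
#!/usr/bin/env bash
# Hypothetical completion of the reviewed fragment; COMPOSE_DIR names the
# docker-compose directory, as in the quoted diff. POSIX `=` is used in
# place of bash-only `==` inside single brackets.
COMPOSE_DIR="${COMPOSE_DIR:-ozone}"
if [ "${COMPOSE_DIR}" = "ozonesecure" ]; then
  SECURITY_ENABLED="true"
else
  SECURITY_ENABLED="false"
fi
echo "${SECURITY_ENABLED}"
```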

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.






[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220348&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220348
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:42
Start Date: 29/Mar/19 01:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256924
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -13,9 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-*** Keywords ***
+*** Settings ***
+Library OperatingSystem
+Library String
+Library BuiltIn
 
+*** Variables ***
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220348)
Time Spent: 5h  (was: 4h 50m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220349&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220349
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:42
Start Date: 29/Mar/19 01:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270256926
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation   Smoke test to start cluster with docker-compose 
environments.
+Library OperatingSystem
+Library String
+Library BuiltIn
+Resource../commonlib.robot
+Resource../s3/commonawslib.robot
+
+*** Variables ***
+${ENDPOINT_URL} http://s3g:9878
+
+*** Keywords ***
+Setup volume names
+${random}Generate Random String  2   [NUMBERS]
+Set Suite Variable   ${volume1}fstest${random}
+Set Suite Variable   ${volume2}fstest2${random}
+
+*** Test Cases ***
+Secure S3 test Success
+Run Keyword Setup s3 tests
+${output} = Execute  aws s3api --endpoint-url 
${ENDPOINT_URL} create-bucket --bucket bucket-test123
+${output} = Execute  aws s3api --endpoint-url 
${ENDPOINT_URL} list-buckets
+Should contain   ${output} bucket-test123
+
+Secure S3 test Failure
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220349)
Time Spent: 5h 10m  (was: 5h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220342&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220342
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:28
Start Date: 29/Mar/19 01:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #651: HDDS-1339. 
Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270255127
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 ##
 @@ -534,4 +536,84 @@ public void testReadRequest() throws Exception {
   proxyProvider.getCurrentProxyOMNodeId());
 }
   }
+
+  @Test
+  public void testOMRatisSnapshot() throws Exception {
+String userName = "user" + RandomStringUtils.randomNumeric(5);
+String adminName = "admin" + RandomStringUtils.randomNumeric(5);
+String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+
+VolumeArgs createVolumeArgs = VolumeArgs.newBuilder()
+.setOwner(userName)
+.setAdmin(adminName)
+.build();
+
+objectStore.createVolume(volumeName, createVolumeArgs);
+OzoneVolume retVolumeinfo = objectStore.getVolume(volumeName);
+
+retVolumeinfo.createBucket(bucketName);
+OzoneBucket ozoneBucket = retVolumeinfo.getBucket(bucketName);
+
+String leaderOMNodeId = objectStore.getClientProxy().getOMProxyProvider()
+.getCurrentProxyOMNodeId();
+OzoneManager ozoneManager = cluster.getOzoneManager(leaderOMNodeId);
+
+// Send commands to ratis to increase the log index so that ratis
+// triggers a snapshot on the state machine.
+
+long appliedLogIndex = 0;
+while (appliedLogIndex <= SNAPSHOT_THRESHOLD) {
+  createKey(ozoneBucket);
+  appliedLogIndex = ozoneManager.getOmRatisServer()
+  .getStateMachineLastAppliedIndex();
+}
+
+GenericTestUtils.waitFor(() -> {
+  if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+return true;
+  }
+  return false;
+}, 1000, 10);
+
+// The current lastAppliedLogIndex on the state machine should be greater
+// than or equal to the saved snapshot index.
+long smLastAppliedIndex =
+ozoneManager.getOmRatisServer().getStateMachineLastAppliedIndex();
+long ratisSnapshotIndex = ozoneManager.loadRatisSnapshotIndex();
+Assert.assertTrue("LastAppliedIndex on OM State Machine ("
++ smLastAppliedIndex + ") is less than the saved snapshot index("
++ ratisSnapshotIndex + ").",
+smLastAppliedIndex >= ratisSnapshotIndex);
+
+// Add more transactions to Ratis to trigger another snapshot
+while (appliedLogIndex <= (smLastAppliedIndex + SNAPSHOT_THRESHOLD)) {
+  createKey(ozoneBucket);
+  appliedLogIndex = ozoneManager.getOmRatisServer()
+  .getStateMachineLastAppliedIndex();
+}
+
+GenericTestUtils.waitFor(() -> {
+  if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+return true;
+  }
+  return false;
+}, 1000, 10);
+
+// The new snapshot index must be greater than the previous snapshot index
+long ratisSnapshotIndexNew = ozoneManager.loadRatisSnapshotIndex();
+Assert.assertTrue("Latest snapshot index must be greater than previous " +
+"snapshot indices", ratisSnapshotIndexNew > ratisSnapshotIndex);  
 
 Review comment:
   whitespace:end of line
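Aside from the trailing whitespace flagged above, the polling idiom in the quoted test can be written more compactly. Below is a small self-contained sketch of the same wait-for-condition pattern — plain Java with illustrative names, not the actual Hadoop `GenericTestUtils` API:

```java
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Minimal stand-in for the polling helper used in the quoted test.
final class WaitFor {
    static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.get()) {
                return;
            }
            Thread.sleep(intervalMs);
        }
        throw new TimeoutException("condition not met within " + timeoutMs + " ms");
    }

    public static void main(String[] args) throws Exception {
        AtomicLong snapshotIndex = new AtomicLong(0);
        // Simulate the snapshot index turning positive after a short delay.
        Thread bump = new Thread(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            snapshotIndex.set(42);
        });
        bump.start();
        // The quoted lambda `if (index > 0) { return true; } return false;`
        // collapses to a single boolean expression:
        waitFor(() -> snapshotIndex.get() > 0, 10, 1000);
        System.out.println("snapshot index = " + snapshotIndex.get());
        bump.join();
    }
}
```

Note how the predicate passed to the helper is a single boolean expression rather than an if/return-true/return-false block, which is the shorter form of the condition used twice in the quoted test.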
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220342)
Time Spent: 2h 10m  (was: 2h)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.
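The description above (persist only the last applied log index in a file under the OM metadata dir) can be sketched as follows. This is an illustrative sketch, not the actual OM implementation; the file name and the plain-text format are assumptions:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch: persist the last applied Ratis log index to a small
// file in a metadata directory, and load it back on restart.
final class RatisSnapshotIndex {
    private final Path indexFile;

    RatisSnapshotIndex(Path metadataDir) throws IOException {
        Files.createDirectories(metadataDir);
        this.indexFile = metadataDir.resolve("om-ratis-snapshot.index");
    }

    void save(long lastAppliedIndex) throws IOException {
        // Write to a temp file and move atomically so a crash mid-write
        // cannot leave a truncated index behind.
        Path tmp = indexFile.resolveSibling(indexFile.getFileName() + ".tmp");
        Files.write(tmp, Long.toString(lastAppliedIndex).getBytes(StandardCharsets.UTF_8));
        Files.move(tmp, indexFile, StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    long load() throws IOException {
        if (!Files.exists(indexFile)) {
            return 0; // no snapshot taken yet
        }
        return Long.parseLong(new String(Files.readAllBytes(indexFile),
                StandardCharsets.UTF_8).trim());
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("om-meta");
        RatisSnapshotIndex idx = new RatisSnapshotIndex(dir);
        System.out.println("initial = " + idx.load());
        idx.save(16389);
        System.out.println("loaded = " + idx.load());
    }
}
```

The atomic-rename step is a common durability precaution for single-value state files; whether the real patch does this is not stated in the issue.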



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220343
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 29/Mar/19 01:28
Start Date: 29/Mar/19 01:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #651: HDDS-1339. 
Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-477830367
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 975 | trunk passed |
   | +1 | compile | 936 | trunk passed |
   | +1 | checkstyle | 229 | trunk passed |
   | -1 | mvnsite | 54 | integration-test in trunk failed. |
   | +1 | shadedclient | 1140 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 196 | trunk passed |
   | +1 | javadoc | 160 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 25 | integration-test in the patch failed. |
   | +1 | compile | 881 | the patch passed |
   | +1 | javac | 881 | the patch passed |
   | +1 | checkstyle | 188 | the patch passed |
   | +1 | mvnsite | 170 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 673 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 223 | the patch passed |
   | +1 | javadoc | 159 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 74 | common in the patch passed. |
   | +1 | unit | 47 | common in the patch passed. |
   | -1 | unit | 597 | integration-test in the patch failed. |
   | +1 | unit | 57 | ozone-manager in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7018 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/651 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux d097c6508e41 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d7a2f94 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/testReport/ |
   | Max. process+thread count | 4099 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-651/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Comment Edited] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804037#comment-16804037
 ] 

Siddharth Wagle edited comment on HDDS-1189 at 3/29/19 1:09 AM:


Hi [~linyiqun], thanks for the first review.

- Adding javadoc to JooqCodeGenerator for better understanding. The main 
method of JooqCodeGenerator is called by the maven lifecycle to generate code.

- ReconSchemaDefinition handles only DDL creation; all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- This is meant to be a skeleton patch that introduces all key components, 
specifically codegen, ORM, and transactions, without introducing noise. I will 
certainly add more unit tests, but want to keep the patch small enough to review 
and understand. The other tables will be added along with the API and logic to 
populate utilization data, i.e. more functional in nature. With this patch, 
anybody who wants to add a table gets POJOs and a DAO for free, needing only to 
add one function call in the SchemaDefinition.


was (Author: swagle):
Hi [~linyiqun], thanks for the first review.

- Adding javadoc to JooqCodeGenerator for better understanding. The main 
method of JooqCodeGenerator is called by the maven lifecycle to generate code.

- ReconSchemaDefinition handles only DDL creation; all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- This is meant to be a skeleton patch that introduces all key components, 
specifically codegen, ORM, and transactions, without introducing noise. I will 
certainly add more unit tests, but want to keep the patch small enough to review 
and understand. The other tables will be added along with the API and logic to 
populate utilization data, i.e. more functional in nature. Anybody who wants to 
add a table gets POJOs and a DAO for free, needing only to add one function call 
in the SchemaDefinition.

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction. For two 
> main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, 
> b) Allows code to schema and schema to code seamless transition, critical for 
> creating DDL through the code and unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc
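To make the "schema to code" idea above concrete, here is a toy schema-definition registry in plain Java. It is not jOOQ and not the actual ReconSchemaDefinition; all names are invented, and the real patch derives POJOs and DAOs via jOOQ code generation rather than emitting raw DDL strings:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.StringJoiner;

// Toy sketch: each table is declared with one call, and the DDL is derived
// from the declarations, so adding a table needs only one registration.
final class SchemaDefinition {
    private final Map<String, LinkedHashMap<String, String>> tables = new LinkedHashMap<>();

    SchemaDefinition table(String name, String... columnDefs) {
        LinkedHashMap<String, String> cols = new LinkedHashMap<>();
        for (String def : columnDefs) {
            String[] parts = def.split(" ", 2); // "name TYPE [constraints]"
            cols.put(parts[0], parts[1]);
        }
        tables.put(name, cols);
        return this;
    }

    List<String> ddl() {
        List<String> statements = new ArrayList<>();
        for (Map.Entry<String, LinkedHashMap<String, String>> t : tables.entrySet()) {
            StringJoiner body = new StringJoiner(", ");
            t.getValue().forEach((col, type) -> body.add(col + " " + type));
            statements.add("CREATE TABLE " + t.getKey() + " (" + body + ")");
        }
        return statements;
    }

    public static void main(String[] args) {
        SchemaDefinition schema = new SchemaDefinition()
                .table("cluster_growth_daily",
                        "timestamp BIGINT NOT NULL", "datanode_count INT");
        schema.ddl().forEach(System.out::println);
    }
}
```

The table name here is a placeholder; the actual Recon tables are defined in the patch itself.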



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804496#comment-16804496
 ] 

Siddharth Wagle commented on HDDS-1189:
---

03 => Addressed comments from [~linyiqun] and fix checkstyle issues.

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction. For two 
> main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, 
> b) Allows code to schema and schema to code seamless transition, critical for 
> creating DDL through the code and unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804496#comment-16804496
 ] 

Siddharth Wagle edited comment on HDDS-1189 at 3/29/19 1:07 AM:


03 => Addressed comments from [~linyiqun] and fixed checkstyle issues.


was (Author: swagle):
03 => Addressed comments from [~linyiqun] and fix checkstyle issues.

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction. For two 
> main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, 
> b) Allows code to schema and schema to code seamless transition, critical for 
> creating DDL through the code and unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1189:
--
Attachment: HDDS-1189.03.patch

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, 
> HDDS-1189.03.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction. For two 
> main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, 
> b) Allows code to schema and schema to code seamless transition, critical for 
> creating DDL through the code and unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-03-28 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804493#comment-16804493
 ] 

Fei Hui commented on HDFS-13596:


[~xkrogen]
I think the HDFS upgrade steps are as follows, and during the upgrade no EC RPCs 
will be called:
# Before upgrading: hdfs client version is 2.x; namenode & datanode version is 
2.x
# During upgrading: hdfs client version is 2.x; namenode & datanode version is 
2.x
# After finalizing: hdfs client version is 2.x; namenode & datanode version is 
3.x
# Upgrade the hdfs client from 2.x to 3.x

{quote}
Can you explain / provide a pointer to how EC is not supported during the 
upgrade?
{quote}
I get your point; maybe EC RPC calls should be checked for whether EC is 
supported?
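A hypothetical sketch of such a guard: reject EC RPCs while the effective layout version predates erasure coding. The constant, method name, and message are all invented for illustration and do not match actual HDFS code; treating -64 as the first EC-capable NameNode layout version is an assumption:

```java
// Hypothetical guard, not actual HDFS code. NameNode layout versions are
// negative and decrease as features are added, so "newer" means "smaller".
final class EcSupportCheck {
    static final int EC_MIN_LAYOUT_VERSION = -64; // assumed first EC-capable version

    static void checkErasureCodingSupported(int effectiveLayoutVersion) {
        if (effectiveLayoutVersion > EC_MIN_LAYOUT_VERSION) {
            throw new UnsupportedOperationException(
                    "Erasure coding is not supported at layout version "
                            + effectiveLayoutVersion
                            + "; finalize the rolling upgrade first.");
        }
    }

    public static void main(String[] args) {
        // Post-upgrade layout version: the EC call is allowed.
        checkErasureCodingSupported(-65);
        try {
            // Pre-EC layout version (e.g. during an unfinalized rolling
            // upgrade): the EC call is rejected.
            checkErasureCodingSupported(-63);
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        System.out.println("done");
    }
}
```

Such a check would fail EC requests fast with a clear message instead of writing EC bits under the old layout version, which is the corruption mode the issue describes.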

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch
>
>
> After rollingUpgrade NN from 2.x and 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:298)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> 

[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220331
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 00:25
Start Date: 29/Mar/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270246020
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -13,9 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-*** Keywords ***
+*** Settings ***
+Library OperatingSystem
+Library String
+Library BuiltIn
 
+*** Variables ***
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220331)
Time Spent: 4h 20m  (was: 4h 10m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220334
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 00:25
Start Date: 29/Mar/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477818901
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 964 | trunk passed |
   | +1 | compile | 25 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 608 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | +1 | compile | 17 | the patch passed |
   | +1 | javac | 17 | the patch passed |
   | +1 | mvnsite | 18 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 13 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 20 | dist in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2607 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux dbbc27fea525 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d7a2f94 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt |
   | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/artifact/out/whitespace-tabs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-632/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220334)
Time Spent: 4h 50m  (was: 4h 40m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220332
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 00:25
Start Date: 29/Mar/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270246025
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Smoke test to start cluster with docker-compose environments.
+Library             OperatingSystem
+Library             String
+Library             BuiltIn
+Resource            ../commonlib.robot
+Resource            ../s3/commonawslib.robot
+
+*** Variables ***
+${ENDPOINT_URL}     http://s3g:9878
+
+*** Keywords ***
+Setup volume names
+    ${random}            Generate Random String  2   [NUMBERS]
+    Set Suite Variable   ${volume1}    fstest${random}
+    Set Suite Variable   ${volume2}    fstest2${random}
+
+*** Test Cases ***
+Secure S3 test Success
+    Run Keyword          Setup s3 tests
+    ${output} =          Execute          aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123
+    ${output} =          Execute          aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets
+    Should contain       ${output}    bucket-test123
+
+Secure S3 test Failure
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220332)
Time Spent: 4.5h  (was: 4h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220333
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 29/Mar/19 00:25
Start Date: 29/Mar/19 00:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270246027
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -72,13 +72,20 @@ execute_tests(){
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   wait_for_datanodes "$COMPOSE_FILE"
+
+  if [ "${COMPOSE_DIR}" == "ozonesecure" ]; then
+   SECURITY_ENABLED="true"
+  else
+   SECURITY_ENABLED="false"
 
 Review comment:
   whitespace:tabs in line
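The hunk quoted above derives `SECURITY_ENABLED` from the compose directory name, but the excerpt is cut off before its closing `fi`. A minimal standalone sketch of the same branch, under the assumption that the complete hunk simply closes the `if` (the function name `security_flag` is an illustration, not part of the actual test.sh):

```shell
#!/bin/sh
# Echo "true" only for the secure compose directory, mirroring the
# SECURITY_ENABLED branch in the quoted test.sh hunk (spaces, not tabs).
security_flag() {
  if [ "$1" = "ozonesecure" ]; then
    echo "true"
  else
    echo "false"
  fi
}

security_flag ozonesecure   # prints: true
security_flag ozone         # prints: false
```

Using spaces here also avoids the `whitespace:tabs in line` warning Yetus raised on the original hunk.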
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220333)
Time Spent: 4h 40m  (was: 4.5h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220318&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220318
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:32
Start Date: 28/Mar/19 23:32
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270237034
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -115,7 +117,60 @@ public TransactionContext startTransaction(
   return ctxt;
 }
 return handleStartTransactionRequests(raftClientRequest, omRequest);
+  }
+
+  /*
+   * Apply a committed log entry to the state machine.
+   */
+  @Override
+  public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
+    try {
+      OMRequest request = OMRatisHelper.convertByteStringToOMRequest(
+          trx.getStateMachineLogEntry().getLogData());
+      long trxLogIndex = trx.getLogEntry().getIndex();
+      CompletableFuture<Message> future = CompletableFuture
+          .supplyAsync(() -> runCommand(request, trxLogIndex));
+      return future;
+    } catch (IOException e) {
+      return completeExceptionally(e);
+    }
+  }
+
+  /**
+   * Query the state machine. The request must be read-only.
+   */
+  @Override
+  public CompletableFuture<Message> query(Message request) {
+    try {
+      OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest(
+          request.getContent());
+      return CompletableFuture.completedFuture(queryCommand(omRequest));
+    } catch (IOException e) {
+      return completeExceptionally(e);
+    }
+  }
+
+  /**
+   * Take OM Ratis snapshot. Write the snapshot index to file. Snapshot index
+   * is the log index corresponding to the last applied transaction on the OM
+   * State Machine.
+   *
+   * @return the last applied index on the state machine which has been
+   * stored in the snapshot file.
+   */
+  @Override
+  public long takeSnapshot() throws IOException {
+    LOG.info("Saving Ratis snapshot on the OM.");
+    return ozoneManager.saveRatisSnapshot();
 
 Review comment:
   done. flushing the DB before saving a snapshot.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220318)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.
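The design note quoted above boils down to: the RocksDB state is checkpointed separately, and the Ratis snapshot persists only the last applied log index to a file in the OM metadata dir. A minimal sketch of that index file, assuming a plain-text single-value format; the class name `SnapshotIndexFile`, its methods, and the file name `om_ratis_snapshot_index` are hypothetical illustrations, not the actual OM implementation:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hedged sketch: persist only the last applied Ratis log index,
// as the HDDS-1339 description says; all names here are illustrative.
public final class SnapshotIndexFile {

  // Write to a temp file first, then rename into place (atomic on POSIX),
  // so a crash mid-write cannot leave a half-written index behind.
  public static void save(Path file, long appliedIndex) throws IOException {
    Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
    Files.write(tmp, Long.toString(appliedIndex).getBytes(StandardCharsets.UTF_8));
    Files.move(tmp, file, StandardCopyOption.ATOMIC_MOVE);
  }

  // A missing file means "no snapshot yet": return -1 so replay starts
  // from the beginning of the log.
  public static long load(Path file) throws IOException {
    if (!Files.exists(file)) {
      return -1L;
    }
    return Long.parseLong(
        new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim());
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("om-meta");
    Path idx = dir.resolve("om_ratis_snapshot_index");
    System.out.println(load(idx));   // -1: no snapshot yet
    save(idx, 42L);
    System.out.println(load(idx));   // 42: last applied index
  }
}
```

On restart, the OM would read this index back and replay only the Ratis log entries beyond it against the checkpointed DB state.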



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220319&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220319
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:32
Start Date: 28/Mar/19 23:32
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #651: HDDS-1339. 
Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#issuecomment-477808698
 
 
   Thank you Bharat for the review. I have updated the patch to address your 
comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220319)
Time Spent: 2h  (was: 1h 50m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220315
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:31
Start Date: 28/Mar/19 23:31
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270236900
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
 ##
 @@ -534,4 +536,61 @@ public void testReadRequest() throws Exception {
   proxyProvider.getCurrentProxyOMNodeId());
 }
   }
+
+  @Test
+  public void testOMRatisSnapshot() throws Exception {
+    String userName = "user" + RandomStringUtils.randomNumeric(5);
+    String adminName = "admin" + RandomStringUtils.randomNumeric(5);
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+
+    VolumeArgs createVolumeArgs = VolumeArgs.newBuilder()
+        .setOwner(userName)
+        .setAdmin(adminName)
+        .build();
+
+    objectStore.createVolume(volumeName, createVolumeArgs);
+    OzoneVolume retVolumeinfo = objectStore.getVolume(volumeName);
+
+    retVolumeinfo.createBucket(bucketName);
+    OzoneBucket ozoneBucket = retVolumeinfo.getBucket(bucketName);
+
+    String leaderOMNodeId = objectStore.getClientProxy().getOMProxyProvider()
+        .getCurrentProxyOMNodeId();
+    OzoneManager ozoneManager = cluster.getOzoneManager(leaderOMNodeId);
+
+    // Send commands to ratis to increase the log index so that ratis
+    // triggers a snapshot on the state machine.
+
+    long appliedLogIndex = 0;
+    while (appliedLogIndex <= SNAPSHOT_THRESHOLD) {
+      String keyName = "key" + RandomStringUtils.randomNumeric(5);
+      String data = "data" + RandomStringUtils.randomNumeric(5);
+      OzoneOutputStream ozoneOutputStream = ozoneBucket.createKey(keyName,
+          data.length(), ReplicationType.STAND_ALONE,
+          ReplicationFactor.ONE, new HashMap<>());
+      ozoneOutputStream.write(data.getBytes(), 0, data.length());
+      ozoneOutputStream.close();
+
+      appliedLogIndex = ozoneManager.getOmRatisServer()
+          .getStateMachineLastAppliedIndex();
+    }
+
+    GenericTestUtils.waitFor(() -> {
+      if (ozoneManager.loadRatisSnapshotIndex() > 0) {
+        return true;
+      }
 
 Review comment:
   done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220315)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220317&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220317
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:31
Start Date: 28/Mar/19 23:31
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270236979
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -308,56 +357,35 @@ private IOException constructExceptionForFailedRequest(
 STATUS_CODE + omResponse.getStatus());
   }
 
-  /*
-   * Apply a committed log entry to the state machine.
-   */
-  @Override
-  public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
-    try {
-      OMRequest request = OMRatisHelper.convertByteStringToOMRequest(
-          trx.getStateMachineLogEntry().getLogData());
-      CompletableFuture<Message> future = CompletableFuture
-          .supplyAsync(() -> runCommand(request));
-      return future;
-    } catch (IOException e) {
-      return completeExceptionally(e);
-    }
-  }
-
   /**
-   * Query the state machine. The request must be read-only.
+   * Submits write request to OM and returns the response Message.
+   * @param request OMRequest
+   * @return response from OM
+   * @throws ServiceException
    */
-  @Override
-  public CompletableFuture<Message> query(Message request) {
-    try {
-      OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest(
-          request.getContent());
-      return CompletableFuture.completedFuture(runCommand(omRequest));
-    } catch (IOException e) {
-      return completeExceptionally(e);
+  private Message runCommand(OMRequest request, long trxLogIndex) {
+    OMResponse response = handler.handle(request);
+    if (response.getSuccess()) {
 
 Review comment:
   done. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220317)
Time Spent: 1h 40m  (was: 1.5h)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220316&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220316
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:31
Start Date: 28/Mar/19 23:31
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270236979
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
 ##
 @@ -308,56 +357,35 @@ private IOException constructExceptionForFailedRequest(
 STATUS_CODE + omResponse.getStatus());
   }
 
-  /*
-   * Apply a committed log entry to the state machine.
-   */
-  @Override
-  public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
-    try {
-      OMRequest request = OMRatisHelper.convertByteStringToOMRequest(
-          trx.getStateMachineLogEntry().getLogData());
-      CompletableFuture<Message> future = CompletableFuture
-          .supplyAsync(() -> runCommand(request));
-      return future;
-    } catch (IOException e) {
-      return completeExceptionally(e);
-    }
-  }
-
   /**
-   * Query the state machine. The request must be read-only.
+   * Submits write request to OM and returns the response Message.
+   * @param request OMRequest
+   * @return response from OM
+   * @throws ServiceException
    */
-  @Override
-  public CompletableFuture<Message> query(Message request) {
-    try {
-      OMRequest omRequest = OMRatisHelper.convertByteStringToOMRequest(
-          request.getContent());
-      return CompletableFuture.completedFuture(runCommand(omRequest));
-    } catch (IOException e) {
-      return completeExceptionally(e);
+  private Message runCommand(OMRequest request, long trxLogIndex) {
+    OMResponse response = handler.handle(request);
+    if (response.getSuccess()) {
 
 Review comment:
   done. flushing the DB before saving a snapshot.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220316)
Time Spent: 1.5h  (was: 1h 20m)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpoint-ed on demand. Ratis 
> snapshots will only preserve the last applied log index by the State Machine 
> on disk. This index will be stored in file in the OM metadata dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1352) Remove unused call in TestStorageContainerManagerHttpServer

2019-03-28 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804431#comment-16804431
 ] 

Arpit Agarwal commented on HDDS-1352:
-

+1 pending Jenkins.

> Remove unused call in TestStorageContainerManagerHttpServer
> ---
>
> Key: HDDS-1352
> URL: https://issues.apache.org/jira/browse/HDDS-1352
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDDS-1352.001.patch
>
>
> Remove unused call to InetSocketAddress.createUnresolved() in 
> TestStorageContainerManagerHttpServer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220309&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220309
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:20
Start Date: 28/Mar/19 23:20
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#discussion_r270234817
 
 

 ##
 File path: hadoop-ozone/tools/pom.xml
 ##
 @@ -59,6 +59,18 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <artifactId>hadoop-hdfs</artifactId>
       <scope>compile</scope>
     </dependency>
+    <dependency>
+      <groupId>com.sun.xml.bind</groupId>
+      <artifactId>jaxb-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>javax.xml.bind</groupId>
 
 Review comment:
   Agree, since you don't add jaxb-impl. We should be good. There is no need to 
exclude.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220309)
Time Spent: 1h 10m  (was: 1h)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
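Per the description, the fix is to put the missing JAXB jars on the `hadoop-ozone-tools` classpath. A sketch of the Maven dependency block: the quoted pom.xml hunk in this thread confirms `com.sun.xml.bind:jaxb-core` and the `javax.xml.bind` groupId, but it is truncated before the second artifactId, so `jaxb-api` below is an assumption, and versions are assumed to be managed elsewhere in the Hadoop build:

```xml
<!-- Sketch only: second artifactId and version management are assumptions -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
</dependency>
<dependency>
  <groupId>javax.xml.bind</groupId>
  <artifactId>jaxb-api</artifactId>
</dependency>
```

With `jaxb-core` present, `JAXBContext.newInstance` can resolve `com.sun.xml.bind.v2.model.annotation.AnnotationReader` and the `NoClassDefFoundError` in the stack trace above goes away.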



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220303
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:13
Start Date: 28/Mar/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270233312
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -13,9 +13,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-*** Keywords ***
+*** Settings ***
+Library             OperatingSystem
+Library             String
+Library             BuiltIn
 
+*** Variables ***
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220303)
Time Spent: 3h 40m  (was: 3.5h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220305&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220305
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:13
Start Date: 28/Mar/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270233324
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/test.sh
 ##
 @@ -72,13 +72,20 @@ execute_tests(){
   docker-compose -f "$COMPOSE_FILE" down
   docker-compose -f "$COMPOSE_FILE" up -d --scale datanode=3
   wait_for_datanodes "$COMPOSE_FILE"
+
+  if [ "${COMPOSE_DIR}" == "ozonesecure" ]; then
+   SECURITY_ENABLED="true"
+  else
+   SECURITY_ENABLED="false"
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220305)
Time Spent: 4h  (was: 3h 50m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220304&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220304
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:13
Start Date: 28/Mar/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r270233318
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-s3.robot
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Smoke test to start cluster with docker-compose environments.
+Library             OperatingSystem
+Library             String
+Library             BuiltIn
+Resource            ../commonlib.robot
+Resource            ../s3/commonawslib.robot
+
+*** Variables ***
+${ENDPOINT_URL}      http://s3g:9878
+
+*** Keywords ***
+Setup volume names
+    ${random}            Generate Random String  2   [NUMBERS]
+    Set Suite Variable   ${volume1}    fstest${random}
+    Set Suite Variable   ${volume2}    fstest2${random}
+
+*** Test Cases ***
+Secure S3 test Success
+    Run Keyword          Setup s3 tests
+    ${output} =          Execute  aws s3api --endpoint-url ${ENDPOINT_URL} create-bucket --bucket bucket-test123
+    ${output} =          Execute  aws s3api --endpoint-url ${ENDPOINT_URL} list-buckets
+    Should contain       ${output}    bucket-test123
+
+Secure S3 test Failure
 
 Review comment:
   whitespace:tabs in line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220304)
Time Spent: 3h 50m  (was: 3h 40m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220307&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220307
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:13
Start Date: 28/Mar/19 23:13
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#discussion_r270233396
 
 

 ##
 File path: hadoop-ozone/tools/pom.xml
 ##
 @@ -59,6 +59,18 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <artifactId>hadoop-hdfs</artifactId>
       <scope>compile</scope>
     </dependency>
+    <dependency>
+      <groupId>com.sun.xml.bind</groupId>
+      <artifactId>jaxb-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>javax.xml.bind</groupId>
 
 Review comment:
   Hi @xiaoyuyao, thanks for the review.  This change adds 3 dependencies, but 
none of them is a transitive dependency via `hadoop-common`.  Can you please 
clarify what needs to be excluded and why?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220307)
Time Spent: 1h  (was: 50m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
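For context, the patch under review fixes this by adding the missing JAXB artifacts to hadoop-ozone/tools/pom.xml. A minimal sketch of what such dependency entries look like follows; only the `com.sun.xml.bind`/`jaxb-core` and `javax.xml.bind` coordinates appear in this discussion, so the `jaxb-api` artifactId and version management via the parent pom are assumptions here, not details taken from the patch.

```xml
<!-- Hypothetical sketch: make com.sun.xml.bind classes resolvable at runtime
     so OzoneConfiguration.readPropertyFromXml can create a JAXBContext.
     The jaxb-api artifactId below is an assumption, not taken from the patch. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
</dependency>
<dependency>
  <groupId>javax.xml.bind</groupId>
  <artifactId>jaxb-api</artifactId>
</dependency>
```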



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220306&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220306
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 28/Mar/19 23:13
Start Date: 28/Mar/19 23:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477804402
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 987 | trunk passed |
   | +1 | compile | 68 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 673 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 21 | the patch passed |
   | +1 | javac | 21 | the patch passed |
   | +1 | mvnsite | 22 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | shelldocs | 19 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2856 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux 9a5cd7534c76 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d7a2f94 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220306)
Time Spent: 4h 10m  (was: 4h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14397) Backport HADOOP-15684 to branch-2

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804404#comment-16804404
 ] 

Hadoop QA commented on HDFS-14397:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 47 unchanged - 1 fixed = 47 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14397 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964085/HDFS-14397-branch-2.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4061e8ed1fd 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Comment Edited] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804037#comment-16804037
 ] 

Siddharth Wagle edited comment on HDDS-1189 at 3/28/19 10:59 PM:
-

Hi [~linyiqun], thanks for the first review.

- Adding the javadoc to JooqCodeGenerator for better understanding. The main 
method of JooqCodeGenerator is called by the Maven lifecycle to generate code.

- ReconSchemaDefinition is only DDL creation, all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- This is meant to be a skeleton patch that introduces all key components, 
specifically: codegen, ORM and transactions without introducing noise. I will 
add more unit tests, certainly, but want to keep it small enough to review and 
understand. The other tables will be added along with API and logic to populate 
utilization data, in short, more functional in nature. Anybody who wants to add 
a table gets POJOs and a DAO for free, needing only to add one function call in 
the SchemaDefinition.


was (Author: swagle):
Hi [~linyiqun], thanks for the first review.

- ReconSchemaDefinition is only DDL creation, all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- Adding the javadoc to JooqCodeGenerator for better understanding. The main 
method of JooqCodeGenerator is called by the Maven lifecycle to generate code.

- This is meant to be a skeleton patch that introduces all key components, 
specifically: codegen, ORM and transactions without introducing noise. I will 
add more unit tests, certainly, but want to keep it small enough to review and 
understand. The other tables will be added along with API and logic to populate 
utilization data, in short, more functional in nature. Anybody who wants to add 
a table gets POJOs and a DAO for free, needing only to add one function call in 
the SchemaDefinition.

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts out SQL dialects, and 
> b) seamless code-to-schema and schema-to-code transitions, critical for 
> creating DDL through code and for unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1189) Recon Aggregate DB schema and ORM

2019-03-28 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804037#comment-16804037
 ] 

Siddharth Wagle edited comment on HDDS-1189 at 3/28/19 10:58 PM:
-

Hi [~linyiqun], thanks for the first review.

- ReconSchemaDefinition is only DDL creation, all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- Adding the javadoc to JooqCodeGenerator for better understanding. The main 
method of JooqCodeGenerator is called by the Maven lifecycle to generate code.

- This is meant to be a skeleton patch that introduces all key components, 
specifically: codegen, ORM and transactions without introducing noise. I will 
add more unit tests, certainly, but want to keep it small enough to review and 
understand. The other tables will be added along with API and logic to populate 
utilization data, in short, more functional in nature. Anybody who wants to add 
a table gets POJOs and a DAO for free, needing only to add one function call in 
the SchemaDefinition.


was (Author: swagle):
Hi [~linyiqun], thanks for the first review.

- ReconSchemaDefinition is only DDL creation, all the CRUD operations are 
contained in the generated DAO objects. I will add a unit test to make this 
point clear.

- This is meant to be a skeleton patch that introduces all key components, 
specifically: codegen, ORM and transactions without introducing noise. I will 
add more unit tests, certainly, but want to keep it small enough to review and 
understand. The other tables will be added along with API and logic to populate 
utilization data, in short, more functional in nature. Anybody who wants to add 
a table gets POJOs and a DAO for free, needing only to add one function call in 
the SchemaDefinition.

> Recon Aggregate DB schema and ORM
> -
>
> Key: HDDS-1189
> URL: https://issues.apache.org/jira/browse/HDDS-1189
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch
>
>
> _Objectives_
> - Define V1 of the db schema for recon service
> - The current proposal is to use jOOQ as the ORM for SQL interaction, for two 
> main reasons: a) a powerful query DSL that abstracts out SQL dialects, and 
> b) seamless code-to-schema and schema-to-code transitions, critical for 
> creating DDL through code and for unit testing across versions of the 
> application.
> - Add e2e unit tests suite for Recon entities, created based on the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1352) Remove unused call in TestStorageContainerManagerHttpServer

2019-03-28 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDDS-1352:
-
Attachment: HDDS-1352.001.patch

> Remove unused call in TestStorageContainerManagerHttpServer
> ---
>
> Key: HDDS-1352
> URL: https://issues.apache.org/jira/browse/HDDS-1352
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDDS-1352.001.patch
>
>
> Remove unused call to InetSocketAddress.createUnresolved() in 
> TestStorageContainerManagerHttpServer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1352) Remove unused call in TestStorageContainerManagerHttpServer

2019-03-28 Thread Shweta (JIRA)
Shweta created HDDS-1352:


 Summary: Remove unused call in 
TestStorageContainerManagerHttpServer
 Key: HDDS-1352
 URL: https://issues.apache.org/jira/browse/HDDS-1352
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Shweta
Assignee: Shweta


Remove unused call to InetSocketAddress.createUnresolved() in 
TestStorageContainerManagerHttpServer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=220294&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220294
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 28/Mar/19 22:42
Start Date: 28/Mar/19 22:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-477797510
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 997 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | mvnsite | 22 | trunk passed |
   | +1 | shadedclient | 637 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 20 | dist in the patch failed. |
   | +1 | compile | 19 | the patch passed |
   | +1 | javac | 19 | the patch passed |
   | +1 | mvnsite | 20 | the patch passed |
   | -1 | shellcheck | 1 | The patch generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) |
   | +1 | shelldocs | 16 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 3 line(s) with tabs. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 2742 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux d7bf174f633f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d7a2f94 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | shellcheck | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/diff-patch-shellcheck.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/artifact/out/whitespace-tabs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220294)
Time Spent: 3.5h  (was: 3h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional 

[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804377#comment-16804377
 ] 

Hadoop QA commented on HDDS-1351:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 15s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 16s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.om.TestScmChillMode |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2597/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1351 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964078/HDDS-1351.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux 3125ae1be124 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4cceeb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2597/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2597/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2597/testReport/ |
| Max. process+thread count | 4388 (vs. 

[jira] [Commented] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804352#comment-16804352
 ] 

Hudson commented on HDDS-1350:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16301 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16301/])
HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contributed 
(bharat: rev d7a2f9456bb4800a63148e58960b5cf06840a27d)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestDatanodeStateMachine.java


> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220282
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 22:13
Start Date: 28/Mar/19 22:13
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#discussion_r270218938
 
 

 ##
 File path: hadoop-ozone/tools/pom.xml
 ##
 @@ -59,6 +59,18 @@
       <artifactId>hadoop-hdfs</artifactId>
       <scope>compile</scope>
     </dependency>
+    <dependency>
+      <groupId>com.sun.xml.bind</groupId>
+      <artifactId>jaxb-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>javax.xml.bind</groupId>

 Review comment:
   This needs to be excluded from hadoop-common dependency like below:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <scope>compile</scope>
      <exclusions>
        <exclusion>
          <groupId>com.sun.xml.bind</groupId>
          <artifactId>jaxb-impl</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220282)
Time Spent: 50m  (was: 40m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
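The failure mode above can be checked independently of Ozone. The following is a hypothetical probe (`JaxbCoreCheck` is not part of the codebase; the class name is taken from the stack trace) that tests whether the jaxb-core class is resolvable on the current classpath before JAXB is initialized:

```java
// Hypothetical classpath probe -- not part of Ozone. If the class named in
// the NoClassDefFoundError above cannot be loaded, any call to
// JAXBContext.newInstance() backed by the JAXB reference implementation
// will fail the same way the stack trace shows.
public class JaxbCoreCheck {
  public static void main(String[] args) {
    // Class reported missing in the NoClassDefFoundError above.
    String probe = "com.sun.xml.bind.v2.model.annotation.AnnotationReader";
    try {
      Class.forName(probe);
      System.out.println("jaxb-core is on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("jaxb-core is missing from the classpath");
    }
  }
}
```

On a classpath without `com.sun.xml.bind:jaxb-core` (as in the failing `hadoop-ozone-tools` assembly), the probe takes the "missing" branch; after the dependency fix it takes the other.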






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1351:
-
Affects Version/s: (was: 0.5.0)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1351:
-
Target Version/s: 0.4.0  (was: 0.4.0, 0.5.0)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Work logged] (HDDS-1339) Implement Ratis Snapshots on OM

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1339?focusedWorklogId=220280&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220280
 ]

ASF GitHub Bot logged work on HDDS-1339:


Author: ASF GitHub Bot
Created on: 28/Mar/19 22:08
Start Date: 28/Mar/19 22:08
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #651: 
HDDS-1339. Implement ratis snapshots on OM
URL: https://github.com/apache/hadoop/pull/651#discussion_r270217752
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1603,18 +1603,27 @@
   <property>
     <name>ozone.om.ratis.log.appender.queue.num-elements</name>
     <value>1024</value>
-    <tag>OZONE, DEBUG, CONTAINER, RATIS</tag>
+    <tag>OZONE, DEBUG, OM, RATIS</tag>
     <description>Number of operation pending with Raft's Log Worker.</description>
   </property>
   <property>
     <name>ozone.om.ratis.log.appender.queue.byte-limit</name>
     <value>32MB</value>
-    <tag>OZONE, DEBUG, CONTAINER, RATIS</tag>
+    <tag>OZONE, DEBUG, OM, RATIS</tag>
     <description>Byte limit for Raft's Log Worker queue.</description>
   </property>
 
+  <property>
+    <name>ozone.om.ratis.snapshot.auto.trigger.threshold</name>
+    <value>40L</value>
 
 Review comment:
   This is the default in Ratis, so I used that. I was thinking we can update it 
after extensive testing. But I am open to suggestions.
 



Issue Time Tracking
---

Worklog Id: (was: 220280)
Time Spent: 1h 10m  (was: 1h)

> Implement Ratis Snapshots on OM
> ---
>
> Key: HDDS-1339
> URL: https://issues.apache.org/jira/browse/HDDS-1339
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For bootstrapping and restarting OMs, we need to implement snapshots in OM. 
> The OM state maintained by RocksDB will be checkpointed on demand. Ratis 
> snapshots will only preserve the last log index applied by the State Machine 
> on disk. This index will be stored in a file in the OM metadata dir.
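As a rough sketch of the persistence step described above (hypothetical names and file format; not the actual OM implementation), recording and recovering the last applied log index could look like:

```java
// Hedged sketch, not OM code: persist the last applied Raft log index to a
// file in the (hypothetical) OM metadata directory so it survives a restart.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class OmRatisSnapshotIndex {
  private final Path indexFile;

  public OmRatisSnapshotIndex(Path omMetadataDir) {
    // File name is illustrative only.
    this.indexFile = omMetadataDir.resolve("om-ratis-snapshot-index");
  }

  /** Atomically record the last applied log index (write temp file, then rename). */
  public void save(long lastAppliedIndex) throws IOException {
    Path tmp = indexFile.resolveSibling(indexFile.getFileName() + ".tmp");
    Files.write(tmp, Long.toString(lastAppliedIndex).getBytes(StandardCharsets.UTF_8));
    Files.move(tmp, indexFile, StandardCopyOption.ATOMIC_MOVE,
        StandardCopyOption.REPLACE_EXISTING);
  }

  /** Return the recorded index, or -1 if no snapshot has been taken yet. */
  public long load() throws IOException {
    if (!Files.exists(indexFile)) {
      return -1L;
    }
    return Long.parseLong(new String(Files.readAllBytes(indexFile),
        StandardCharsets.UTF_8).trim());
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("om-meta");
    OmRatisSnapshotIndex idx = new OmRatisSnapshotIndex(dir);
    System.out.println(idx.load());  // prints -1: nothing recorded yet
    idx.save(40L);
    System.out.println(idx.load());  // prints 40
  }
}
```

The temp-file-plus-atomic-rename pattern keeps the index file readable even if the process dies mid-write, which matters for a value consulted on restart.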






[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804343#comment-16804343
 ] 

Ajay Kumar commented on HDDS-1351:
--

+1

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Work logged] (HDDS-1340) Add List Containers API for Recon

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?focusedWorklogId=220273&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220273
 ]

ASF GitHub Bot logged work on HDDS-1340:


Author: ASF GitHub Bot
Created on: 28/Mar/19 21:55
Start Date: 28/Mar/19 21:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #648: HDDS-1340. Add 
List Containers API for Recon
URL: https://github.com/apache/hadoop/pull/648#issuecomment-477785692
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1080 | trunk passed |
   | +1 | compile | 40 | trunk passed |
   | +1 | checkstyle | 15 | trunk passed |
   | +1 | mvnsite | 26 | trunk passed |
   | +1 | shadedclient | 742 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 37 | trunk passed |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 21 | the patch passed |
   | +1 | javac | 21 | the patch passed |
   | -0 | checkstyle | 11 | hadoop-ozone/ozone-recon: The patch generated 3 new 
+ 0 unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 844 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 34 | ozone-recon in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3146 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/648 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 8508d8bea447 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4cceeb2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/artifact/out/diff-checkstyle-hadoop-ozone_ozone-recon.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/testReport/ |
   | Max. process+thread count | 305 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-recon U: hadoop-ozone/ozone-recon |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-648/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 220273)
Time Spent: 2.5h  (was: 2h 20m)

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Recon server should support "/containers" API that lists all the containers
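To make the shape of such a listing endpoint concrete, here is a minimal, hypothetical sketch using only the JDK's built-in HTTP server; Recon's actual implementation, framework, and JSON layout may differ, and the container IDs are made up:

```java
// Hypothetical sketch of a "/containers" listing endpoint -- not Recon code.
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ContainersEndpointSketch {
  // Build a JSON array of container objects by hand (no JSON library needed).
  static String listContainersJson(List<Long> containerIds) {
    StringBuilder sb = new StringBuilder("{\"containers\":[");
    for (int i = 0; i < containerIds.size(); i++) {
      if (i > 0) sb.append(',');
      sb.append("{\"containerId\":").append(containerIds.get(i)).append('}');
    }
    return sb.append("]}").toString();
  }

  public static void main(String[] args) throws IOException {
    List<Long> ids = Arrays.asList(1L, 2L, 3L);  // stand-in for the metadata store
    HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
    server.createContext("/containers", exchange -> {
      byte[] body = listContainersJson(ids).getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().set("Content-Type", "application/json");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
    System.out.println("listening on port " + server.getAddress().getPort());
    System.out.println(listContainersJson(ids));
    server.stop(0);
  }
}
```

A real implementation would page through the container metadata store rather than hold IDs in memory, but the request/response shape is the same.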






[jira] [Comment Edited] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804336#comment-16804336
 ] 

Bharat Viswanadham edited comment on HDDS-1350 at 3/28/19 9:53 PM:
---

Thank You [~xyao] for the contribution.

[~shwetayakkali] I have assigned the Jira back to Xiaoyu, as there is a pull 
request already available for this.

I have committed this to the trunk.

 


was (Author: bharatviswa):
Thank You [~xyao] for the contribution.

[~shwetayakkali] I have assigned the Jira back to Xiaoyu, as there is a pull 
request already available provided.

I have committed this to the trunk.

 

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Updated] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1350:
-
Fix Version/s: 0.5.0

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Updated] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1350:
-
Resolution: Fixed
  Assignee: Xiaoyu Yao  (was: Shweta)
Status: Resolved  (was: Patch Available)

Thank You [~xyao] for the contribution.

[~shwetayakkali] I have assigned the Jira back to Xiaoyu, as there is a pull 
request already available.

I have committed this to the trunk.

 

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Work logged] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?focusedWorklogId=220269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220269
 ]

ASF GitHub Bot logged work on HDDS-1350:


Author: ASF GitHub Bot
Created on: 28/Mar/19 21:50
Start Date: 28/Mar/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #656: 
HDDS-1350. Fix checkstyle issue in TestDatanodeStateMachine. Contribu…
URL: https://github.com/apache/hadoop/pull/656
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 220269)
Time Spent: 40m  (was: 0.5h)

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Work logged] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?focusedWorklogId=220268&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220268
 ]

ASF GitHub Bot logged work on HDDS-1350:


Author: ASF GitHub Bot
Created on: 28/Mar/19 21:50
Start Date: 28/Mar/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #656: HDDS-1350. Fix 
checkstyle issue in TestDatanodeStateMachine. Contribu…
URL: https://github.com/apache/hadoop/pull/656#issuecomment-477784196
 
 
   +1 LGTM.
   I will commit this.
 



Issue Time Tracking
---

Worklog Id: (was: 220268)
Time Spent: 0.5h  (was: 20m)

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804333#comment-16804333
 ] 

Hadoop QA commented on HDDS-1351:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} ozone-0.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} ozone-0.4 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
1s{color} | {color:red} tools in ozone-0.4 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} tools in ozone-0.4 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} ozone-0.4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/660 |
| JIRA Issue | HDDS-1351 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 68aebaf3683f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | ozone-0.4 / f2dee89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-compile-hadoop-ozone_tools.txt
 |
| mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-mvnsite-hadoop-ozone_tools.txt
 |
| mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvninstall-hadoop-ozone_tools.txt
 |
| compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt
 |
| javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt
 |
| mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvnsite-hadoop-ozone_tools.txt
 |
| unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-unit-hadoop-ozone_tools.txt
 |
|  

[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220267
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 21:45
Start Date: 28/Mar/19 21:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660#issuecomment-477782897
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ ozone-0.4 Compile Tests _ |
   | +1 | mvninstall | 1048 | ozone-0.4 passed |
   | -1 | compile | 61 | tools in ozone-0.4 failed. |
   | -1 | mvnsite | 27 | tools in ozone-0.4 failed. |
   | +1 | shadedclient | 1778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 18 | ozone-0.4 passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 22 | tools in the patch failed. |
   | -1 | compile | 25 | tools in the patch failed. |
   | -1 | javac | 25 | tools in the patch failed. |
   | -1 | mvnsite | 22 | tools in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 727 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 22 | tools in the patch failed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 2813 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/660 |
   | JIRA Issue | HDDS-1351 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux 68aebaf3683f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4 / f2dee89 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-compile-hadoop-ozone_tools.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/branch-mvnsite-hadoop-ozone_tools.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvninstall-hadoop-ozone_tools.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-compile-hadoop-ozone_tools.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-mvnsite-hadoop-ozone_tools.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/artifact/out/patch-unit-hadoop-ozone_tools.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/testReport/ |
   | Max. process+thread count | 441 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-660/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220267)
Time Spent: 40m  (was: 0.5h)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects 

[jira] [Commented] (HDFS-14390) Provide kerberos support for AliasMap service used by Provided storage

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804331#comment-16804331
 ] 

Hadoop QA commented on HDFS-14390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964055/HDFS-14390.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 59d88d55e330 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 49b02d4 |
| 

[jira] [Assigned] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDDS-1350:


Assignee: Shweta  (was: Xiaoyu Yao)

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804324#comment-16804324
 ] 

Hadoop QA commented on HDDS-1351:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/659 |
| JIRA Issue | HDDS-1351 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 0fe4c6c24cf7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4cceeb2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/testReport/ |
| Max. process+thread count | 2388 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> 

[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220260&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220260
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 21:34
Start Date: 28/Mar/19 21:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #659: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/659#issuecomment-49668
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1027 | trunk passed |
   | +1 | compile | 58 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 1789 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | mvnsite | 23 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 63 | tools in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 2821 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/659 |
   | JIRA Issue | HDDS-1351 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux 0fe4c6c24cf7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4cceeb2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/testReport/ |
   | Max. process+thread count | 2388 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-659/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 220260)
Time Spent: 0.5h  (was: 20m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at 

[jira] [Commented] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities

2019-03-28 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804316#comment-16804316
 ] 

Hadoop QA commented on HDDS-1312:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 24s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 38s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2596/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1312 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964073/HDDS-1312.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 5b1ebc6e0820 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 4cceeb2 |
| maven | 

[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1351:
-
Target Version/s: 0.4.0, 0.5.0  (was: 0.5.0)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.
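
The diagnosis above points at a missing jar rather than a code bug: JAXB's {{ContextFinder}} loads {{com.sun.xml.bind.v2.model.annotation.AnnotationReader}} reflectively, so the failure only surfaces at runtime when {{jaxb-core}} is absent. A minimal sketch of the kind of fix implied, assuming the standard {{com.sun.xml.bind}} Maven coordinates (the actual module, scope, and version used by the patch may differ; version shown is hypothetical and would normally come from dependency management):

```xml
<!-- Sketch for hadoop-ozone/tools/pom.xml: pull jaxb-core onto the
     runtime classpath so OzoneConfiguration.readPropertyFromXml can
     create a JAXBContext without a NoClassDefFoundError. -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
  <version>2.3.0</version> <!-- hypothetical; use the project-managed version -->
  <scope>runtime</scope>
</dependency>
```

With the jar on the classpath, rerunning {{ozone genconf /tmp}} inside the container is the natural manual verification step, matching the reproduction steps listed in the issue.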






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1351:
-
Target Version/s: 0.5.0  (was: 0.4.0, 0.5.0)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDFS-14397) Backport HADOOP-15684 to branch-2

2019-03-28 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14397:

Status: Patch Available  (was: Open)

> Backport HADOOP-15684 to branch-2
> -
>
> Key: HDFS-14397
> URL: https://issues.apache.org/jira/browse/HDFS-14397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-14397-branch-2.000.patch
>
>
> As multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HADOOP-15684.






[jira] [Updated] (HDFS-14397) Backport HADOOP-15684 to branch-2

2019-03-28 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14397:

Attachment: HDFS-14397-branch-2.000.patch

> Backport HADOOP-15684 to branch-2
> -
>
> Key: HDFS-14397
> URL: https://issues.apache.org/jira/browse/HDFS-14397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-14397-branch-2.000.patch
>
>
> As multi-SBN feature is already backported to branch-2, this is a follow-up 
> to backport HADOOP-15684.






[jira] [Work logged] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?focusedWorklogId=220241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220241
 ]

ASF GitHub Bot logged work on HDDS-1350:


Author: ASF GitHub Bot
Created on: 28/Mar/19 20:52
Start Date: 28/Mar/19 20:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #656: HDDS-1350. Fix 
checkstyle issue in TestDatanodeStateMachine. Contribu…
URL: https://github.com/apache/hadoop/pull/656#issuecomment-477766400
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1144 | trunk passed |
   | +1 | compile | 48 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 32 | trunk passed |
   | +1 | shadedclient | 762 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 54 | trunk passed |
   | +1 | javadoc | 27 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 14 | hadoop-hdds/container-service: The patch generated 
0 new + 0 unchanged - 1 fixed = 0 total (was 1) |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 58 | the patch passed |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 58 | container-service in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3265 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/656 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 77e8e53a25b3 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4cceeb2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-656/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220241)
Time Spent: 20m  (was: 10m)

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following tests are FAILED:
>  
> [checkstyle]: checkstyle check is failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Affects Version/s: 0.5.0
 Target Version/s: 0.4.0, 0.5.0
   Status: Patch Available  (was: In Progress)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0, 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220236
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 20:28
Start Date: 28/Mar/19 20:28
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #660: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/660
 
 
   ## What changes were proposed in this pull request?
   
   Add `jaxb-core` and some `javax` artifacts to `hadoop-ozone-tools` 
dependencies to make `ozone genconf` work with JDK11, too.
   
   https://issues.apache.org/jira/browse/HDDS-1351
   
   ## How was this patch tested?
   
   ```
   $ mvn -Phdds -DskipTests -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade 
-am -pl :hadoop-ozone-dist clean package
   $ cd $(git rev-parse 
--show-toplevel)/hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone
   $ docker-compose run datanode ozone genconf /tmp
   ozone-site.xml has been generated at /tmp
   ```
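The dependency change described in the PR summary might look like the following pom.xml fragment. This is a hypothetical sketch, not the actual patch; the exact artifact list and versions should be taken from the attached patch, and versions are assumed to come from dependencyManagement:

```xml
<!-- Hypothetical hadoop-ozone-tools dependency additions (versions assumed
     to be managed by the parent pom's dependencyManagement section). -->
<dependency>
  <groupId>com.sun.xml.bind</groupId>
  <artifactId>jaxb-core</artifactId>
</dependency>
<dependency>
  <groupId>javax.xml.bind</groupId>
  <artifactId>jaxb-api</artifactId>
</dependency>
```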
 



Issue Time Tracking
---

Worklog Id: (was: 220236)
Time Spent: 20m  (was: 10m)

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1351:
-
Labels: pull-request-available  (was: )

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Work logged] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?focusedWorklogId=220235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220235
 ]

ASF GitHub Bot logged work on HDDS-1351:


Author: ASF GitHub Bot
Created on: 28/Mar/19 20:12
Start Date: 28/Mar/19 20:12
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #659: [HDDS-1351] 
NoClassDefFoundError when running ozone genconf
URL: https://github.com/apache/hadoop/pull/659
 
 
   ## What changes were proposed in this pull request?
   
   Add `jaxb-core` to `hadoop-ozone-tools` dependencies to make `ozone genconf` 
work again.
   
   https://issues.apache.org/jira/browse/HDDS-1351
   
   ## How was this patch tested?
   
   ```
   $ mvn -Phdds -DskipTests -Dmaven.javadoc.skip=true -Pdist -Dtar -DskipShade 
-am -pl :hadoop-ozone-dist clean package
   $ cd $(git rev-parse 
--show-toplevel)/hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozones3
   $ docker-compose run datanode ozone genconf /tmp
   ozone-site.xml has been generated at /tmp
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 220235)
Time Spent: 10m
Remaining Estimate: 0h

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1351.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Attachment: HDDS-1351.001.patch

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
> Attachments: HDDS-1351.001.patch
>
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Commented] (HDFS-14395) Remove WARN Logging From Interrupts

2019-03-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804245#comment-16804245
 ] 

Hudson commented on HDFS-14395:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16300/])
HDFS-14395. Remove WARN Logging From Interrupts. Contributed by David (gifuma: 
rev 49b02d4a9bf9beac19f716488348ea4e30563ff4)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> Remove WARN Logging From Interrupts
> ---
>
> Key: HDFS-14395
> URL: https://issues.apache.org/jira/browse/HDFS-14395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14395.1.patch
>
>
> * Remove WARN level logging for interrupts which are simply ignored anyway.
> *  In places where interrupt is not ignored, ensure that the Thread's 
> interrupt status is continued
> * Small logging updates
> This class produces a lot of superfluous stack traces if it is interrupted.
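The second bullet above — preserving the thread's interrupt status where the exception is not rethrown — follows a standard Java idiom, which can be sketched as below. This is a minimal illustration of the general pattern, not the actual DataStreamer change:

```java
public class InterruptStatusDemo {

    // Swallow InterruptedException without logging a WARN-level stack trace,
    // but restore the thread's interrupt flag so callers can still observe it.
    static boolean sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            boolean finished = sleepQuietly(10_000);
            System.out.println("finished=" + finished
                + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        worker.start();
        worker.interrupt(); // interrupt the sleeping worker
        worker.join();      // worker reports finished=false interrupted=true
    }
}
```

Restoring the flag via Thread.currentThread().interrupt() lets higher-level code react to the interrupt even though the exception itself is swallowed without a stack trace.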






[jira] [Commented] (HDFS-14393) Refactor FsDatasetCache for SCM cache implementation

2019-03-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804246#comment-16804246
 ] 

Hudson commented on HDFS-14393:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16300/])
HDFS-14393. Refactor FsDatasetCache for SCM cache implementation. (rakeshr: rev 
f3f51284d57ef2e0c7e968b6eea56eab578f7e93)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCacheRevocation.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MemoryMappableBlockLoader.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MemoryCacheStats.java
* (delete) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlockLoader.java


> Refactor FsDatasetCache for SCM cache implementation
> 
>
> Key: HDFS-14393
> URL: https://issues.apache.org/jira/browse/HDFS-14393
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14393-001.patch, HDFS-14393-002.patch, 
> HDFS-14393-003.patch
>
>
> This jira sub-task is to make FsDatasetCache more cleaner to plugin DRAM and 
> PMem implementations.






[jira] [Commented] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804247#comment-16804247
 ] 

Hudson commented on HDDS-1318:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16300/])
HDDS-1318. Fix MalformedTracerStateStringException on DN logs. (github: rev 
ca5e4ce0367228bc0ac032c4654d3deb7493316b)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/tracing/package-info.java
* (add) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/tracing/TestStringCodec.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainerWithTLS.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestContainerReplication.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java


> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> 

[jira] [Commented] (HDDS-1293) ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException

2019-03-28 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804248#comment-16804248
 ] 

Hudson commented on HDDS-1293:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16300 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16300/])
HDDS-1293. ExcludeList#getProtoBuf throws (shashikant: rev 
ac4010bb22bd9e29dc5be74570fe3fda6c933032)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java


> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
> -
>
> Key: HDDS-1293
> URL: https://issues.apache.org/jira/browse/HDDS-1293
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0, 0.5.0
>
> Attachments: HDDS-1293.000.patch, HDDS-1293.001.patch
>
>
> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because 
> getProtoBuf uses parallelStreams
> {code}
> 2019-03-17 16:24:35,774 INFO  retry.RetryInvocationHandler 
> (RetryInvocationHandler.java:log(411)) - 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  3
>   at java.util.ArrayList.add(ArrayList.java:463)
>   at 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>   at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
>   at 
> java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
>   at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
>   at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
>   at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
>   at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
>   at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>   at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
>   at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at 
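The failure mode above can be reproduced outside Ozone: feeding a non-thread-safe accumulator (such as the {{ArrayList}} backing a protobuf builder) from a parallel stream's {{forEach}} is racy and can lose elements or throw {{ArrayIndexOutOfBoundsException}}, while a sequential stream or a collector is safe. A minimal sketch using only JDK classes (illustrative, not the actual {{ExcludeList}} code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelAddDemo {
    public static void main(String[] args) {
        // Racy pattern (mirrors the bug): ArrayList.add is not thread-safe,
        // so a parallel forEach into it may corrupt the list or throw.
        //
        // List<Integer> racy = new ArrayList<>();
        // IntStream.range(0, 100_000).parallel().forEach(racy::add); // unsafe

        // Safe option 1: keep the stream sequential (the HDDS-1293 fix
        // amounts to this).
        List<Integer> sequential = new ArrayList<>();
        IntStream.range(0, 100_000).forEach(sequential::add);

        // Safe option 2: let the stream framework do the accumulation;
        // Collectors merge per-thread partial results safely.
        List<Integer> collected = IntStream.range(0, 100_000)
            .parallel()
            .boxed()
            .collect(Collectors.toList());

        System.out.println(sequential.size()); // 100000
        System.out.println(collected.size());  // 100000
    }
}
```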

[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804238#comment-16804238
 ] 

Doroszlai, Attila commented on HDDS-1351:
-

[~xyao], try e.g. the {{ozones3}} JDK8-based compose file to reproduce the issue.

However, you are right that JDK11 makes it worse: there {{jaxb-api}} itself is 
missing, so it fails even earlier:

{code:title=docker exec ozone_datanode_1 ozone genconf /tmp}
Error: Unable to initialize main class 
org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
{code}
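A quick way to tell which of the two jars is missing on a given container is to probe for the classes named in the stack traces rather than letting the tool crash. The class names below come from the traces in this thread; the probe itself is an illustrative diagnostic, not part of Ozone:

```java
public class JaxbClasspathProbe {
    /** Reports class availability instead of crashing with NoClassDefFoundError. */
    static String probe(String className) {
        try {
            Class.forName(className);
            return className + ": present";
        } catch (ClassNotFoundException e) {
            return className + ": MISSING";
        }
    }

    public static void main(String[] args) {
        // JAXBException lives in jaxb-api (removed from the JDK as of Java 11);
        // AnnotationReader lives in the jaxb-core implementation jar.
        System.out.println(probe("javax.xml.bind.JAXBException"));
        System.out.println(probe("com.sun.xml.bind.v2.model.annotation.AnnotationReader"));
    }
}
```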

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1351:

Affects Version/s: 0.4.0

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.4.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in {{jaxb-core}} jar, which is not in the 
> {{hadoop-ozone-tools}} classpath.






[jira] [Updated] (HDDS-1293) ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException

2019-03-28 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1293:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~msingh] for the review. I have committed this change to trunk as well 
as ozone-0.4 branch.

> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
> -
>
> Key: HDDS-1293
> URL: https://issues.apache.org/jira/browse/HDDS-1293
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0, 0.5.0
>
> Attachments: HDDS-1293.000.patch, HDDS-1293.001.patch
>
>
> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because 
> getProtoBuf uses parallelStreams
> {code}
> 2019-03-17 16:24:35,774 INFO  retry.RetryInvocationHandler 
> (RetryInvocationHandler.java:log(411)) - 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  3
>   at java.util.ArrayList.add(ArrayList.java:463)
>   at 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>   at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
>   at 
> java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
>   at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
>   at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
>   at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
>   at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
>   at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>   at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
>   at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>   at 

[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-28 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804220#comment-16804220
 ] 

Rakesh R commented on HDFS-14355:
-

[~PhiloHe], the HDFS-14393 sub-task has been resolved; please rebase your patch 
on the interface changes.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch, HDFS-14355.004.patch, 
> HDFS-14355.005.patch, HDFS-14355.006.patch
>
>
> This task is to implement caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful when native support 
> isn't available or convenient in some environments or platforms.
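For context, the pure-Java mapping the description refers to boils down to {{FileChannel.map}}. A minimal, self-contained sketch of that pattern (not the actual HDFS-14355 patch; the temp file stands in for a cached block replica, which in the PMem case would live on a DAX-mounted filesystem):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedCacheDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("block-cache", ".dat");
        byte[] payload = "hello block".getBytes(StandardCharsets.UTF_8);
        String readBackStr;

        // Map a region of the file into memory; reads and writes then go
        // through the page cache (or persistent memory on a DAX mount)
        // without explicit read()/write() system calls.
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_WRITE, 0, payload.length);
            buf.put(payload);   // write into the mapping
            buf.force();        // flush dirty pages to the backing store

            buf.rewind();
            byte[] readBack = new byte[payload.length];
            buf.get(readBack);
            readBackStr = new String(readBack, StandardCharsets.UTF_8);
            System.out.println(readBackStr);
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```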






[jira] [Updated] (HDFS-14393) Refactor FsDatasetCache for SCM cache implementation

2019-03-28 Thread Rakesh R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-14393:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I have committed to trunk. Thanks [~umamaheswararao] and [~PhiloHe] for the 
reviews!

> Refactor FsDatasetCache for SCM cache implementation
> 
>
> Key: HDFS-14393
> URL: https://issues.apache.org/jira/browse/HDFS-14393
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14393-001.patch, HDFS-14393-002.patch, 
> HDFS-14393-003.patch
>
>
> This jira sub-task is to make FsDatasetCache cleaner so that DRAM and 
> PMem implementations can be plugged in.
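As a rough illustration of that kind of seam, the refactor separates cache bookkeeping from the memory backend behind a loader interface, so a DRAM-backed and a PMem-backed implementation can coexist. The names below are a hypothetical sketch, not the classes from the patch:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical pluggable backend: maps a block into cacheable memory. */
interface BlockLoader {
    ByteBuffer load(long blockId, int length);
    String backend();
}

/** DRAM-backed implementation: plain off-heap allocation. */
class DramLoader implements BlockLoader {
    public ByteBuffer load(long blockId, int length) {
        return ByteBuffer.allocateDirect(length);
    }
    public String backend() { return "DRAM"; }
}

public class LoaderDemo {
    public static void main(String[] args) {
        // The cache bookkeeping never touches the backend directly,
        // so a PMem loader could be swapped in here without other changes.
        Map<Long, ByteBuffer> cache = new HashMap<>();
        BlockLoader loader = new DramLoader();
        cache.put(42L, loader.load(42L, 4096));
        System.out.println(loader.backend() + " cached "
            + cache.get(42L).capacity() + " bytes"); // DRAM cached 4096 bytes
    }
}
```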






[jira] [Commented] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities

2019-03-28 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804212#comment-16804212
 ] 

Shashikant Banerjee commented on HDDS-1312:
---

Thanks [~jnp] for the review. Added some new tests in v2 patch.

> Add more unit tests to verify BlockOutputStream functionalities
> ---
>
> Key: HDDS-1312
> URL: https://issues.apache.org/jira/browse/HDDS-1312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: HDDS-1312.000.patch, HDDS-1312.001.patch
>
>
> This jira aims to add more unit test coverage for BlockOutputStream 
> functionalities.






[jira] [Updated] (HDDS-1312) Add more unit tests to verify BlockOutputStream functionalities

2019-03-28 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1312:
--
Attachment: HDDS-1312.001.patch

> Add more unit tests to verify BlockOutputStream functionalities
> ---
>
> Key: HDDS-1312
> URL: https://issues.apache.org/jira/browse/HDDS-1312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Attachments: HDDS-1312.000.patch, HDDS-1312.001.patch
>
>
> This jira aims to add more unit test coverage for BlockOutputStream 
> functionalities.






[jira] [Updated] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1318:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks Ajay for the review. I've committed the patch to trunk and ozone-0.4.

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
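The root cause is visible in the exception message itself: Jaeger's textmap propagation expects a span context of the form {{traceId:spanId:parentSpanId:flags}} (hex fields), but the carrier held a bare UUID. A small sketch of that format check; the regex is an approximation of what a codec like {{StringCodec}} must accept, not the actual implementation:

```java
import java.util.regex.Pattern;

public class TracerStateCheck {
    // Approximation of Jaeger's textmap format:
    // traceId:spanId:parentSpanId:flags, all hexadecimal fields.
    private static final Pattern JAEGER = Pattern.compile(
        "^[0-9a-f]{1,32}:[0-9a-f]{1,16}:[0-9a-f]{1,16}:[0-9a-f]{1,2}$");

    static boolean isValidTracerState(String s) {
        return s != null && JAEGER.matcher(s).matches();
    }

    public static void main(String[] args) {
        // The offending value from the DN log: a bare UUID, not a span context,
        // so extraction fails with MalformedTracerStateStringException.
        System.out.println(
            isValidTracerState("2c919331-9a51-4bc4-acee-df57a8dcecf0")); // false
        System.out.println(
            isValidTracerState("4bf92f3577b34da6:00f067aa0ba902b7:0:1")); // true
    }
}
```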






[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=220214=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220214
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 28/Mar/19 19:01
Start Date: 28/Mar/19 19:01
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #641: HDDS-1318. 
Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220214)
Time Spent: 3h  (was: 2h 50m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDDS-1351) NoClassDefFoundError when running ozone genconf

2019-03-28 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16804199#comment-16804199
 ] 

Xiaoyu Yao commented on HDDS-1351:
--

[~adoroszlai], the docker image you are using is based on JDK11.

If you use any other docker-compose file whose image has a JDK8 tag, this won't be 
an issue.

> NoClassDefFoundError when running ozone genconf
> ---
>
> Key: HDDS-1351
> URL: https://issues.apache.org/jira/browse/HDDS-1351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {{ozone genconf}} fails due to incomplete classpath.
> Steps to reproduce:
> # [build and run 
> Ozone|https://cwiki.apache.org/confluence/display/HADOOP/Development+cluster+with+docker]
> # run {{ozone genconf}} in one of the containers:
> {code}
> $ ozone genconf /tmp
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> com/sun/xml/bind/v2/model/annotation/AnnotationReader
>   at java.lang.ClassLoader.defineClass1(Native Method)
> ...
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:242)
>   at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
>   at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
>   at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
>   at 
> org.apache.hadoop.hdds.conf.OzoneConfiguration.readPropertyFromXml(OzoneConfiguration.java:57)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.generateConfigurations(GenerateOzoneRequiredConfigurations.java:103)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:73)
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.call(GenerateOzoneRequiredConfigurations.java:50)
>   at picocli.CommandLine.execute(CommandLine.java:919)
> ...
>   at 
> org.apache.hadoop.ozone.genconf.GenerateOzoneRequiredConfigurations.main(GenerateOzoneRequiredConfigurations.java:68)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.xml.bind.v2.model.annotation.AnnotationReader
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 36 more
> {code}
> {{AnnotationReader}} is in the {{jaxb-core}} jar, which is not on the 
> {{hadoop-ozone-tools}} classpath.
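As background for the failure mode (not part of the report itself): the JVM resolves a class by scanning each classpath entry in order for the matching {{.class}} file, and a {{ClassNotFoundException}} like the one above means no jar on the computed classpath ships that file. A rough Python sketch of that lookup, with invented in-memory jars standing in for the real artifacts:

```python
import io
import zipfile

def make_jar(entries):
    """Build an in-memory jar (a zip) containing the given entry names."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in entries:
            zf.writestr(name, b"")  # contents don't matter for the lookup
    buf.seek(0)
    return buf

def find_provider(classpath, class_name):
    """Return the label of the first jar containing the class, else None."""
    entry = class_name.replace(".", "/") + ".class"
    for label, jar in classpath:
        with zipfile.ZipFile(jar) as zf:
            if entry in zf.namelist():
                return label
    return None

# A classpath missing jaxb-core: the lookup fails, as in the stack trace.
classpath = [("jaxb-api.jar", make_jar(["javax/xml/bind/JAXBContext.class"]))]
missing = find_provider(
    classpath, "com.sun.xml.bind.v2.model.annotation.AnnotationReader")
# Adding jaxb-core supplies the class and the lookup succeeds.
classpath.append(("jaxb-core.jar", make_jar(
    ["com/sun/xml/bind/v2/model/annotation/AnnotationReader.class"])))
found = find_provider(
    classpath, "com.sun.xml.bind.v2.model.annotation.AnnotationReader")
print(missing, found)  # None jaxb-core.jar
```

This mirrors why adding {{jaxb-core}} to the {{hadoop-ozone-tools}} classpath resolves the error on JDK 11.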



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1350:
-
Status: Patch Available  (was: Open)

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following checks failed:
>  
> [checkstyle]: the checkstyle check failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])
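For context on what the CI gate checks: checkstyle writes an XML report in which each {{<file>}} element holds {{<error>}} elements carrying the line, severity, and message, and the build fails while any remain. A small sketch that extracts violations from a sample report (the report contents below are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Invented sample in the standard checkstyle XML report layout.
report = """<?xml version="1.0" encoding="UTF-8"?>
<checkstyle version="8.8">
  <file name="TestDatanodeStateMachine.java">
    <error line="31" severity="error"
           message="Unused import - java.util.UUID."/>
  </file>
  <file name="Clean.java"/>
</checkstyle>"""

def violations(xml_text):
    """Return (file, line, message) tuples for every checkstyle error."""
    root = ET.fromstring(xml_text)
    return [(f.get("name"), int(e.get("line")), e.get("message"))
            for f in root.iter("file")
            for e in f.iter("error")]

errs = violations(report)
print(errs)
```

Files with no {{<error>}} children (like {{Clean.java}} above) contribute nothing, so an empty result means the check passes.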






[jira] [Work logged] (HDDS-1350) Fix checkstyle issue in TestDatanodeStateMachine

2019-03-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1350?focusedWorklogId=220212=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-220212
 ]

ASF GitHub Bot logged work on HDDS-1350:


Author: ASF GitHub Bot
Created on: 28/Mar/19 18:53
Start Date: 28/Mar/19 18:53
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #656: HDDS-1350. 
Fix checkstyle issue in TestDatanodeStateMachine. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/656
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 220212)
Time Spent: 10m
Remaining Estimate: 0h

> Fix checkstyle issue in TestDatanodeStateMachine
> 
>
> Key: HDDS-1350
> URL: https://issues.apache.org/jira/browse/HDDS-1350
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following checks failed:
>  
> [checkstyle]: the checkstyle check failed 
> ([https://ci.anzix.net/job/ozone-nightly/44/checkstyle/])





