[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325472#comment-16325472 ] Eric Yang commented on HDFS-12990: -- [~chris.douglas] There have been plenty of times when a Hadoop code change broke a 10+ year convention because we rationalized ourselves into believing that no harm would be done by the change. For example, CRC32C as the default checksum was introduced in a minor release, and left people unable to roll back when an upgrade failed. The Datanode layoutVersion mismatch between the Hadoop 2.0.5 and 2.2.0 releases is another. Plenty of times things don't go as planned; we get over it by working on the problem instead of reverting it. Many customers' clusters were caught by surprise when incompatible changes were introduced in minor or maintenance releases. Any good change takes time and planning, and I am not certain this NN RPC change, in such a short window, can restore order as quickly as anyone has hoped. I also don't like the port number, but I dislike the risk more: someone might test the Hadoop 3.0.0 release, decide to put 3.0.1 into production at some future time, and then find that we made an incompatible change to the NN RPC port at a point we could not predict. Having the Hadoop 3.0.0 release stick out like a sore thumb is not a good way to address this issue. > Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode > Affects Versions: 3.0.0 > Reporter: Xiao Chen > Assignee: Xiao Chen > Priority: Critical > Attachments: HDFS-12990.01.patch > > > In HDFS-9427 (HDFS should not default to ephemeral ports), we changed all > default ports to ephemeral ports, which is much appreciated by admins. As part > of that change, we also modified the NN RPC port from the famous 8020 to > 9820, to be closer to the other ports changed there. 
> With more integration going on, it appears that all the other ephemeral port > changes are fine, but the NN RPC port change is painful for downstream projects > migrating to Hadoop 3. Some examples include: > # Hive table locations pointing to hdfs://nn:port/dir > # Downstream minicluster unit tests that assumed 8020 > # Oozie workflows / downstream scripts that used 8020 > This isn't a problem for HA URLs, since those do not include the port > number. But considering the downstream impact, instead of requiring all of > them to change their stuff, it would be a far better experience to leave the NN > port unchanged. This will benefit Hadoop 3 adoption and ease unnecessary > upgrade burdens. > It is of course incompatible, but given that 3.0.0 is just out, IMO it is worth > switching the port back. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
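The breakage pattern described above is easy to see with plain java.net.URI: a fully-qualified location like hdfs://nn:8020/dir pins the old port, while an HA logical URI carries no port at all and is unaffected by the default change. A minimal sketch (the class name and example paths are illustrative, not from any Hadoop code):

```java
import java.net.URI;

public class NnUriPortCheck {
    // Returns the port embedded in an HDFS location, or -1 when the URI
    // has none (the HA logical-URI case, which falls back to the default).
    public static int portOf(String location) {
        return URI.create(location).getPort();
    }

    public static void main(String[] args) {
        // Non-HA location: the port is baked in, so a default change breaks it.
        System.out.println(portOf("hdfs://nn:8020/user/hive/warehouse")); // 8020
        // HA logical URI: no port present, resolved from configuration.
        System.out.println(portOf("hdfs://mycluster/user/hive/warehouse")); // -1
    }
}
```

This is why only the stored, fully-qualified URIs (Hive metastore entries, hard-coded test and script locations) need repair when the default port moves.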
[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325458#comment-16325458 ] Chris Douglas commented on HDFS-12990: -- We wrote the compatibility guidelines _to avoid breaking users_. If there were any benefit to this change, then we could discuss tradeoffs, but there are none. By retaining this change, we choose to add a bunch of tedious, rote repairs to sites trying to upgrade from 2.x. It's not challenging to work around this, but it's annoying and avoidable. bq. Maybe we should look at the NN RPC port as a new feature to ensure the downstream projects really tested with Hadoop 3 instead of gambling on compatibility. That's a creative way to look at it, but changing the NN port doesn't achieve that in any meaningful sense. Even if this does require changes, they are superficial. More to the point, we have *no* interest in second-guessing other projects' testing practices, or challenging whether they "really certified with Hadoop 3". That's not merely outside our charter, it is antithetical to it. bq. This is the greatest challenge to Hadoop policy. bq. I know that forever compatibility is not sustainable and it only costs more in the long run. We should move on and do something better for Hadoop. Are we looking at the same JIRA? This is not like moving to YARN, which required a lot of work to shake out its incompatible changes, but we won a better architecture by it. This upends a 10+ year convention, the project and its users gain _nothing_ by it, and it is trivial to fix. 
> Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode > Affects Versions: 3.0.0 > Reporter: Xiao Chen > Assignee: Xiao Chen > Priority: Critical > Attachments: HDFS-12990.01.patch
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325418#comment-16325418 ] genericqa commented on HDFS-12919: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-3 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 19s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} branch-3 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}177m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 | | JIRA Issue | HDFS-12919 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906007/HDFS-12919-branch-3.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 563c10c129e8 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3 / d3fbcd9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (HDFS-13017) Block Storage: implement simple iscsi discovery in jscsi server
[ https://issues.apache.org/jira/browse/HDFS-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325395#comment-16325395 ] Chen Liang commented on HDFS-13017: --- Thanks [~elek] for working on this! The v001 patch LGTM, just one thing: is it possible to add a unit test for the new API? Also, could you please verify that the failed tests are not related? > Block Storage: implement simple iscsi discovery in jscsi server > --- > > Key: HDFS-13017 > URL: https://issues.apache.org/jira/browse/HDFS-13017 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone > Affects Versions: HDFS-7240 > Reporter: Elek, Marton > Assignee: Elek, Marton > Attachments: HDFS-13017-HDFS-7240.001.patch > > > The current jscsi server doesn't support iscsi discovery. > To use the jscsi server as a Kubernetes storage backend, we need discovery. > jScsi supports it; we just need to override a method and add an additional call > to the server protocol to get the list of the available cblocks.
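The shape of the change the description asks for (an overridable discovery hook that calls back to the server protocol for the volume list) can be sketched as below. All names here are hypothetical stand-ins, not the real jScsi or CBlock API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of an iSCSI discovery hook; CBlockClient and
// listVolumes() are invented names, not the actual jScsi/CBlock API.
public class DiscoverySketch {

    // Stand-in for the "additional call to the server protocol".
    interface CBlockClient {
        List<String> listVolumes();
    }

    // Stand-in for the method a discovery-capable server would override:
    // a discovery session reports one iSCSI target name per volume.
    static List<String> discoverTargets(CBlockClient client) {
        return client.listVolumes();
    }

    public static void main(String[] args) {
        CBlockClient client = () -> Arrays.asList(
                "iqn.2018-01.org.apache:vol1",
                "iqn.2018-01.org.apache:vol2");
        System.out.println(discoverTargets(client));
    }
}
```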
[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12919: --- Attachment: HDFS-12919-branch-3.003.patch > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Attachments: HDFS-12919-branch-3.001.patch, > HDFS-12919-branch-3.002.patch, HDFS-12919-branch-3.003.patch, > HDFS-12919.000.patch, HDFS-12919.001.patch, HDFS-12919.002.patch, > HDFS-12919.003.patch, HDFS-12919.004.patch, HDFS-12919.005.patch, > HDFS-12919.006.patch, HDFS-12919.007.patch, HDFS-12919.008.patch, > HDFS-12919.009.patch, HDFS-12919.010.patch, HDFS-12919.011.patch, > HDFS-12919.012.patch, HDFS-12919.013.patch, HDFS-12919.013.patch, > HDFS-12919.014.patch, HDFS-12919.015.patch, HDFS-12919.016.patch, > HDFS-12919.017.patch, HDFS-12919.018.patch, HDFS-12919.019.patch, > HDFS-12919.020.patch, HDFS-12919.021.patch, HDFS-12919.022.patch, > HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code}
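The stack trace shows a guard-method pattern: each RPC entry point first calls checkOperation, which throws UnsupportedOperationException for operations the Router has not yet implemented. A minimal sketch of that pattern (the real RouterRpcServer.checkOperation differs in detail; these names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative guard pattern, not the actual RouterRpcServer code.
public class RpcGuardSketch {
    private final Set<String> supported = new HashSet<>();

    void support(String op) {
        supported.add(op);
    }

    // Mirrors the failure in the trace: unimplemented operations are
    // rejected up front with UnsupportedOperationException.
    void checkOperation(String op) {
        if (!supported.contains(op)) {
            throw new UnsupportedOperationException(
                "Operation \"" + op + "\" is not supported");
        }
    }

    void setErasureCodingPolicy(String src, String policy) {
        checkOperation("setErasureCodingPolicy"); // throws until implemented
        // ... once supported, resolve src and delegate to the right namenode ...
    }
}
```

Supporting the EC methods then amounts to registering them as implemented and forwarding each call to the namenode that owns the path.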
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325343#comment-16325343 ] genericqa commented on HDFS-12919: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-3 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 9s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 53s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} branch-3 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 52s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 52s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 4s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 | | JIRA Issue | HDFS-12919 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906003/HDFS-12919-branch-3.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b14eb93d19bb 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3 / d3fbcd9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall |
[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12919: --- Attachment: HDFS-12919-branch-3.002.patch > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Attachments: HDFS-12919-branch-3.001.patch, > HDFS-12919-branch-3.002.patch, HDFS-12919.000.patch, HDFS-12919.001.patch, > HDFS-12919.002.patch, HDFS-12919.003.patch, HDFS-12919.004.patch, > HDFS-12919.005.patch, HDFS-12919.006.patch, HDFS-12919.007.patch, > HDFS-12919.008.patch, HDFS-12919.009.patch, HDFS-12919.010.patch, > HDFS-12919.011.patch, HDFS-12919.012.patch, HDFS-12919.013.patch, > HDFS-12919.013.patch, HDFS-12919.014.patch, HDFS-12919.015.patch, > HDFS-12919.016.patch, HDFS-12919.017.patch, HDFS-12919.018.patch, > HDFS-12919.019.patch, HDFS-12919.020.patch, HDFS-12919.021.patch, > HDFS-12919.022.patch, HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code}
[jira] [Commented] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325312#comment-16325312 ] genericqa commented on HDFS-12919: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-3 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 4s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 19s{color} | {color:green} branch-3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} branch-3 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 10s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 | | JIRA Issue | HDFS-12919 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906000/HDFS-12919-branch-3.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f07e18b64e94 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3 / d3fbcd9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall |
[jira] [Updated] (HDFS-12919) RBF: Support erasure coding methods in RouterRpcServer
[ https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12919: --- Attachment: HDFS-12919-branch-3.001.patch > RBF: Support erasure coding methods in RouterRpcServer > -- > > Key: HDFS-12919 > URL: https://issues.apache.org/jira/browse/HDFS-12919 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Critical > Labels: RBF > Attachments: HDFS-12919-branch-3.001.patch, HDFS-12919.000.patch, > HDFS-12919.001.patch, HDFS-12919.002.patch, HDFS-12919.003.patch, > HDFS-12919.004.patch, HDFS-12919.005.patch, HDFS-12919.006.patch, > HDFS-12919.007.patch, HDFS-12919.008.patch, HDFS-12919.009.patch, > HDFS-12919.010.patch, HDFS-12919.011.patch, HDFS-12919.012.patch, > HDFS-12919.013.patch, HDFS-12919.013.patch, HDFS-12919.014.patch, > HDFS-12919.015.patch, HDFS-12919.016.patch, HDFS-12919.017.patch, > HDFS-12919.018.patch, HDFS-12919.019.patch, HDFS-12919.020.patch, > HDFS-12919.021.patch, HDFS-12919.022.patch, HDFS-12919.023.patch > > > MAPREDUCE-6954 started to tune the erasure coding settings for staging files. > However, the {{Router}} does not support this operation and throws: > {code} > 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002 > org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException): > Operation "setErasureCodingPolicy" is not supported > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368) > at > org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805) > {code}
[jira] [Commented] (HDFS-13013) Fix closeContainer API with the right container state change
[ https://issues.apache.org/jira/browse/HDFS-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325285#comment-16325285 ] genericqa commented on HDFS-13013: --

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 14m 52s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || HDFS-7240 Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 31s | HDFS-7240 passed |
| +1 | compile | 1m 50s | HDFS-7240 passed |
| +1 | checkstyle | 0m 46s | HDFS-7240 passed |
| +1 | mvnsite | 1m 52s | HDFS-7240 passed |
| +1 | shadedclient | 14m 13s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 7s | HDFS-7240 passed |
| +1 | javadoc | 2m 1s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 51s | the patch passed |
| +1 | compile | 1m 47s | the patch passed |
| +1 | cc | 1m 47s | the patch passed |
| +1 | javac | 1m 47s | the patch passed |
| +1 | checkstyle | 0m 42s | hadoop-hdfs-project: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) |
| +1 | mvnsite | 1m 47s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 17s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 18s | the patch passed |
| +1 | javadoc | 1m 57s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 40s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 120m 24s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 204m 41s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.ozone.web.client.TestKeysRatis |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.ozone.container.common.impl.TestContainerPersistence |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.ksm.TestKeySpaceManager |
| | hadoop.ozone.tools.TestCorona |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.ozone.web.client.TestKeys |
[jira] [Updated] (HDFS-11539) Block Storage : configurable max cache size
[ https://issues.apache.org/jira/browse/HDFS-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-11539: - Component/s: (was: hdfs) ozone > Block Storage : configurable max cache size > --- > > Key: HDFS-11539 > URL: https://issues.apache.org/jira/browse/HDFS-11539 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Mukul Kumar Singh > > Currently, there is no max size limit for CBlock's local cache. In theory, > this means the cache can grow unbounded. We should make the > max size configurable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
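As a generic illustration only (not CBlock's actual cache code), a configurable size cap on a local cache can be enforced with LinkedHashMap's eviction hook; the class name and constructor parameter here are assumptions for the sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative size-capped LRU cache: once the configured maximum number of
 * entries is exceeded, the least-recently-accessed entry is evicted.
 */
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries; // would come from configuration in practice

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration gives LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the eldest entry.
        return size() > maxEntries;
    }
}
```

With a cap of 2, inserting a third entry evicts the least-recently-used one, so the cache can never grow without bound.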
[jira] [Commented] (HDFS-12990) Change default NameNode RPC port back to 8020
[ https://issues.apache.org/jira/browse/HDFS-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325225#comment-16325225 ] Eric Yang commented on HDFS-12990: -- Maybe we should look at the NN RPC port change as a new feature, to ensure that downstream projects really test with Hadoop 3 instead of gambling on compatibility. The Hadoop bylaws don't mention compatibility between major versions, only between minor versions within the same major version. This bylaw is challenged by this JIRA because a major incompatible change has been made. This is the greatest challenge to Hadoop policy. If the policy doesn't hold up when challenged, then I would suggest revising the policy first. Most complaints are a knee-jerk reaction to the uncertainty of certifying a new product version. I don't think this issue is worth revising Hadoop policy over. We should not rationalize this as a bug in order to bend the policy; otherwise it would be disrespectful to Hadoop policy and encourage others to break the rules. We do not know whether downstream projects depend on other ports that were changed. Therefore, I would recommend that downstream projects which have hard-coded the 8020 port make new releases that are properly certified with Hadoop 3. I found that other projects, like Chukwa and Ambari, are immune to the NN RPC port change problem. Well-designed code is most likely immune to this problem. If we commit this change, a future major release might need to break compatibility again to shake off design flaws, and we will be unable to do so for the sake of compatibility. While I admire Microsoft's ability to enable upgrades from Windows 1.0 to Windows 10, Windows still doesn't preserve all my themes and coloring between Windows versions, or make my old scanner work. Can Hadoop make the same kind of commitment without a huge investment? Having seen multiple generations of programmers retire at IBM,
I know that forever compatibility is not sustainable and that it only costs more in the long run. We should move on and do something better for Hadoop. Please respect my -1 vote, because I believe in the wisdom of the current versioning scheme, and the port change ensures that downstream projects are really certified with Hadoop 3. > Change default NameNode RPC port back to 8020 > - > > Key: HDFS-12990 > URL: https://issues.apache.org/jira/browse/HDFS-12990 > Project: Hadoop HDFS > Issue Type: Task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Critical > Attachments: HDFS-12990.01.patch > > > In HDFS-9427 (HDFS should not default to ephemeral ports), we changed all > default ports to ephemeral ports, which is much appreciated by admins. As part > of that change, we also modified the NN RPC port from the famous 8020 to > 9820, to be closer to the other ports changed there. > With more integration going on, it appears that all the other ephemeral port > changes are fine, but the NN RPC port change is painful for downstream projects > migrating to Hadoop 3. Some examples include: > # Hive table locations pointing to hdfs://nn:port/dir > # Downstream minicluster unit tests that assumed 8020 > # Oozie workflows / downstream scripts that used 8020 > This isn't a problem for HA URLs, since those do not include the port > number. But considering the downstream impact, instead of requiring all of > them to change their stuff, it would be a much better experience to leave the NN > port unchanged. This will benefit Hadoop 3 adoption and ease unnecessary > upgrade burdens. > It is of course incompatible, but given that 3.0.0 is just out, IMO it is worth > switching the port back.
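The description above notes that HA URLs are unaffected because a logical nameservice URI carries no port, while a direct NameNode URI hard-codes one. A minimal stdlib sketch of that difference (the hostnames "nn1.example.com" and "mycluster" are hypothetical examples, not values from this issue):

```java
import java.net.URI;

public class NnUriCheck {
    /**
     * Returns the port embedded in a filesystem URI, or -1 when the URI
     * carries none (as with an HA logical nameservice URI).
     */
    public static int portOf(String uri) {
        return URI.create(uri).getPort();
    }
}
```

A client that resolves hdfs://mycluster/... through configuration never sees a port number, which is why only consumers of direct hdfs://host:port/... URIs were affected by the 8020-to-9820 change.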
[jira] [Commented] (HDFS-13016) globStatus javadoc refers to glob pattern as "regular expression"
[ https://issues.apache.org/jira/browse/HDFS-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325170#comment-16325170 ] Hudson commented on HDFS-13016: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13495 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13495/]) HDFS-13016. globStatus javadoc refers to glob pattern as "regular (arp: rev 7016dd44e0975274856dc19f19815123c4b2a352) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java > globStatus javadoc refers to glob pattern as "regular expression" > - > > Key: HDFS-13016 > URL: https://issues.apache.org/jira/browse/HDFS-13016 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, hdfs >Reporter: Ryanne Dolan >Assignee: Mukul Kumar Singh >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HDFS-13016.001.patch > > > Glob patterns are not regular expressions. Both are well-defined and > universally understood concepts. The term "regular expression" is misapplied > here. > The method name "globStatus" indicates that pathPattern should be a glob, but > the javadoc says pathPattern is a regular expression. The documentation goes > on to describe the accepted format of pathPattern and clearly describes globs. > The term "regular expression" should not be associated with pathPattern.
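The glob-vs-regex distinction behind this javadoc fix can be seen with the JDK's own PathMatcher, which accepts both syntaxes. Hadoop's globStatus implements its own glob dialect, so this is only a sketch of why the two pattern languages differ, not of Hadoop's matcher:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class GlobVsRegex {
    // In a glob, '*' by itself matches any run of characters (not crossing
    // directory separators); in a regex, '*' quantifies the preceding token,
    // and '.' matches any single character instead of a literal dot.
    public static boolean globMatch(String pattern, String name) {
        PathMatcher m = FileSystems.getDefault().getPathMatcher("glob:" + pattern);
        return m.matches(Paths.get(name));
    }

    public static boolean regexMatch(String pattern, String name) {
        return name.matches(pattern); // full-string regex match
    }
}
```

The same pattern string "part-*.txt" matches "part-00000.txt" as a glob but not as a regular expression, which is exactly why calling a glob parameter a "regular expression" misleads callers.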
[jira] [Updated] (HDFS-13016) globStatus javadoc refers to glob pattern as "regular expression"
[ https://issues.apache.org/jira/browse/HDFS-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13016: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) I've committed this. Thanks for fixing this Mukul, thanks for the bug report Ryanne. > globStatus javadoc refers to glob pattern as "regular expression" > - > > Key: HDFS-13016 > URL: https://issues.apache.org/jira/browse/HDFS-13016 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, hdfs >Reporter: Ryanne Dolan >Assignee: Mukul Kumar Singh >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HDFS-13016.001.patch > > > Glob patterns are not regular expressions. Both are well-defined and > universally understood concepts. The term "regular expression" is misapplied > here. > The method name "globStatus" indicates that pathPattern should be a glob, but > the javadoc says pathPattern is a regular expression. The documentation goes > on to describe the accepted format of pathPattern and clearly describes globs. > The term "regular expression" should not be associated with pathPattern.
[jira] [Updated] (HDFS-13013) Fix closeContainer API with the right container state change
[ https://issues.apache.org/jira/browse/HDFS-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-13013: -- Attachment: HDFS-13013-HDFS-7240.002.patch > Fix closeContainer API with the right container state change > > > Key: HDFS-13013 > URL: https://issues.apache.org/jira/browse/HDFS-13013 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-13013-HDFS-7240.001.patch, > HDFS-13013-HDFS-7240.002.patch > > > The SCMCLI close container command is based on ContainerMapping#closeContainer, > which was built on the state machine (open->close) that existed before HDFS-12980. > HDFS-12980 changed the container state machine: a container now has to be > finalized into the closing state before it can be closed (open->closing->closed). > This ticket is opened to fix ContainerMapping#closeContainer to match the new > state machine.
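The lifecycle described above (open -> closing -> closed) can be sketched as a minimal state machine. The class, method, and state names here are illustrative assumptions, not Ozone's actual container state-machine code:

```java
/**
 * Illustrative container lifecycle after HDFS-12980: a container must be
 * finalized into CLOSING before it may be CLOSED; closing an OPEN container
 * directly is rejected.
 */
public class ContainerLifecycle {
    public enum State { OPEN, CLOSING, CLOSED }

    private State state = State.OPEN;

    public State getState() {
        return state;
    }

    /** OPEN -> CLOSING: the finalize step the old closeContainer skipped. */
    public void finalizeContainer() {
        if (state != State.OPEN) {
            throw new IllegalStateException("finalize requires OPEN, was " + state);
        }
        state = State.CLOSING;
    }

    /** CLOSING -> CLOSED: only a finalized container may be closed. */
    public void close() {
        if (state != State.CLOSING) {
            throw new IllegalStateException("close requires CLOSING, was " + state);
        }
        state = State.CLOSED;
    }
}
```

Under the old (open->close) machine, close() would have been legal from OPEN; the patch's point is that close must now go through the intermediate CLOSING state.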
[jira] [Commented] (HDFS-12636) Ozone: OzoneFileSystem: Implement seek functionality for rpc client
[ https://issues.apache.org/jira/browse/HDFS-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325075#comment-16325075 ] genericqa commented on HDFS-12636: --

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| 0 | mvndep | 1m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 21s | HDFS-7240 passed |
| +1 | compile | 14m 48s | HDFS-7240 passed |
| +1 | checkstyle | 2m 18s | HDFS-7240 passed |
| +1 | mvnsite | 2m 53s | HDFS-7240 passed |
| +1 | shadedclient | 17m 19s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 52s | HDFS-7240 passed |
| +1 | javadoc | 2m 40s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 12s | the patch passed |
| +1 | compile | 12m 41s | the patch passed |
| +1 | javac | 12m 41s | the patch passed |
| -0 | checkstyle | 2m 8s | root: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) |
| +1 | mvnsite | 2m 38s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 16s | the patch passed |
| -1 | javadoc | 1m 6s | hadoop-hdfs-project_hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
|| || || || Other Tests ||
| +1 | unit | 1m 48s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 127m 27s | hadoop-hdfs in the patch failed. |
| +1 | unit | 0m 47s | hadoop-ozone in the patch passed. |
| +1 | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 236m 10s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.ozone.container.common.impl.TestContainerPersistence |
| | hadoop.ozone.client.rpc.TestOzoneRpcClient |
| | hadoop.ozone.ksm.TestKeySpaceManager |
| | hadoop.cblock.TestCBlockReadWrite |
| | hadoop.ozone.tools.TestCorona |
| | hadoop.ozone.scm.TestSCMCli |
| | hadoop.ozone.web.client.TestKeys |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue |