[ https://issues.apache.org/jira/browse/HDFS-13108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361307#comment-16361307 ]
genericqa commented on HDFS-13108:
----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 41s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| +1 | mvninstall | 22m 37s | HDFS-7240 passed |
| +1 | compile | 0m 23s | HDFS-7240 passed |
| +1 | checkstyle | 0m 15s | HDFS-7240 passed |
| +1 | mvnsite | 0m 27s | HDFS-7240 passed |
| +1 | shadedclient | 14m 9s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 33s | HDFS-7240 passed |
| +1 | javadoc | 0m 21s | HDFS-7240 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 28s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| +1 | checkstyle | 0m 11s | the patch passed |
| +1 | mvnsite | 0m 22s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 39s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 0m 34s | hadoop-tools/hadoop-ozone generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 14s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 1m 23s | hadoop-ozone in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 57m 30s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-ozone |
| | Dead store to path in org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(URI, Configuration) at OzoneFileSystem.java:[line 104] |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-13108 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12910269/HDFS-13108-HDFS-7240.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c3d7761315f7 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / f3d07ef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/23037/artifact/out/new-findbugs-hadoop-tools_hadoop-ozone.html |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23037/testReport/ |
| Max. process+thread count | 422 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-ozone U: hadoop-tools/hadoop-ozone |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23037/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
> Ozone: OzoneFileSystem: Simplified url schema for Ozone File System
> -------------------------------------------------------------------
>
>                 Key: HDFS-13108
>                 URL: https://issues.apache.org/jira/browse/HDFS-13108
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDFS-13108-HDFS-7240.001.patch, HDFS-13108-HDFS-7240.002.patch
>
> A. Current state
>
> 1. The datanode host / volume / bucket must be defined in the defaultFS (e.g. o3://datanode:9864/test/bucket1).
> 2. The root of the file system points to the bucket (e.g. 'dfs -ls /' lists all the keys from bucket1).
>
> This works very well, but there are some limitations.
>
> B. Problem one
>
> The current code doesn't support fully qualified locations. For example, 'dfs -ls o3://datanode:9864/test/bucket1/dir1' does not work.
>
> C. Problem two
>
> I tried to fix the previous problem, but it's not trivial. The biggest obstacle is the Path.makeQualified call, which transforms an unqualified URL into a qualified one. It is part of Path.java, so it is common to all Hadoop file systems.
>
> The current implementation qualifies a URL by keeping the scheme (e.g. o3://) and authority (e.g. datanode:9864) from the defaultFS and using the relative path as the end of the qualified URL. For example, makeQualified(defaultUri=o3://datanode:9864/test/bucket1, path=dir1/file) returns o3://datanode:9864/dir1/file, which is obviously wrong (the correct result would be o3://datanode:9864/test/bucket1/dir1/file). I tried a workaround with a custom makeQualified in the Ozone code; it worked from the command line, but not with Spark, which uses the Hadoop API and therefore the original makeQualified code path.
>
> D. Solution
>
> We should support makeQualified calls, so we can use any path in the defaultFS.
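The qualification behavior described in problem C can be illustrated with a minimal sketch. This is a simplified model of what Hadoop's Path.makeQualified does, not the actual implementation: the scheme and authority are copied from the default FS URI, and a relative path is resolved against the working directory only, so the volume/bucket path of the default FS is never consulted.

```java
import java.net.URI;

public class MakeQualifiedSketch {
    // Simplified model of Hadoop's Path.makeQualified: scheme and authority
    // come from the default FS URI; a relative path is resolved against the
    // working directory. The path component of the default FS URI
    // ("/test/bucket1") is ignored, which is exactly the reported problem.
    static URI makeQualified(URI defaultUri, String workingDir, String path) {
        String absolute = path.startsWith("/") ? path
                : workingDir.endsWith("/") ? workingDir + path
                : workingDir + "/" + path;
        return URI.create(defaultUri.getScheme() + "://"
                + defaultUri.getAuthority() + absolute);
    }

    public static void main(String[] args) {
        URI defaultUri = URI.create("o3://datanode:9864/test/bucket1");
        URI qualified = makeQualified(defaultUri, "/", "dir1/file");
        // Prints o3://datanode:9864/dir1/file -- the volume/bucket prefix is
        // lost; the desired result is o3://datanode:9864/test/bucket1/dir1/file.
        System.out.println(qualified);
    }
}
```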
> I propose to use a simplified schema: o3://bucket.volume/
>
> This is similar to the s3a format, where the pattern is s3a://bucket.region/. We don't need to set the hostname of the datanode (or of the KSM, in case of service discovery), but it could be made configurable with additional Hadoop configuration values such as fs.o3.bucket.bucketname.volumename.address=http://datanode:9864 (as far as I know, this is how s3a works today).
>
> We also need to define restrictions on volume names (in our case they must no longer include dots).
>
> ps: some spark output
>
> 2018-02-03 18:43:04 WARN Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
> 2018-02-03 18:43:05 INFO Client:54 - Uploading resource file:/tmp/spark-03119be0-9c3d-440c-8e9f-48c692412ab5/__spark_libs__2440448967844904444.zip -> o3://datanode:9864/user/hadoop/.sparkStaging/application_1517611085375_0001/__spark_libs__2440448967844904444.zip
>
> My defaultFS was o3://datanode:9864/test/bucket1, but Spark qualified the name of the home directory.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
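The proposed o3://bucket.volume/ schema could be parsed from the URI authority along the following lines. This is a hypothetical sketch, not the patch's actual code; parseBucketVolume is an invented helper, and the split at the first dot is only unambiguous under the proposed restriction that volume names contain no dots.

```java
import java.net.URI;

public class OzoneUriSketch {
    // Hypothetical parser for the proposed o3://bucket.volume/ schema:
    // split the URI authority at the first dot into bucket and volume,
    // mirroring how s3a encodes the bucket in the authority.
    static String[] parseBucketVolume(URI uri) {
        String authority = uri.getAuthority();
        int dot = authority.indexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException(
                "Expected authority of form bucket.volume: " + authority);
        }
        // Volume names must not contain dots for this split to be unambiguous.
        return new String[] { authority.substring(0, dot),
                              authority.substring(dot + 1) };
    }

    public static void main(String[] args) {
        String[] bv = parseBucketVolume(URI.create("o3://bucket1.volume1/dir1/file"));
        System.out.println(bv[0] + " / " + bv[1]); // bucket1 / volume1
    }
}
```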